Versatile Moving AverageThe Versatile Moving Average (VMA) is a comprehensive, all-in-one tool for trend analysis. It is designed to act as a central hub for advanced MA calculations by combining a wide selection of average types, calculation modes, and a multi-timeframe engine.
Key Features:
Comprehensive MA Selection: Provides a wide variety of moving average types (e.g., EMA, SMA, WMA, HMA, and their volume-weighted counterparts). Allows full customization of length, source, and offset.
Advanced Calculation Modes:
Volume Weighting: Optionally weights the selected MA calculation by volume, making it more responsive to market participation.
Normalization (Geometric Average): A key feature is the optional 'Normalize' mode. When enabled, the indicator calculates a Geometric Moving Average by averaging the logarithms of the source price. This measures the average compound growth rate, making it well-suited for analyzing assets with exponential price behavior.
Multi-Timeframe (MTF) Engine: The indicator includes an MTF conversion block. When a Higher Timeframe (HTF) is selected, advanced options become available: Fill Gaps handles data gaps, and Wait for timeframe to close prevents repainting by ensuring the indicator only updates when the HTF bar closes.
Integrated Alerts: Comes with built-in alerts for the source price crossing over or under the calculated VMA, allowing for timely notifications.
DISCLAIMER
For Informational/Educational Use Only: This indicator is provided for informational and educational purposes only. It does not constitute financial, investment, or trading advice, nor is it a recommendation to buy or sell any asset.
Use at Your Own Risk: All trading decisions you make based on the information or signals generated by this indicator are made solely at your.
No Guarantee of Performance: Past performance is not an indicator of future results. The author makes no guarantee regarding the accuracy of the signals or future profitability.
No Liability: The author shall not be held liable for any financial losses or damages incurred directly or indirectly from the use of this indicator.
Signals Are Not Recommendations: The alerts and visual signals (e.g., crossovers) generated by this tool are not direct recommendations to buy or sell. They are technical observations for your own analysis and consideration.
อินดิเคเตอร์และกลยุทธ์
Fear–Greed Index📈 Fear–Greed Index 
This indicator provides a sophisticated, multi-faceted measure of market sentiment, plotting it as an oscillator that ranges from -100 (Extreme Fear) to +100 (Extreme Greed).
Unlike standard indicators like RSI or MACD, this tool is built on principles from behavioral finance and social physics to model the complex psychology of the market. It does not use any of TradingView's built-in math functions and instead calculates everything from scratch.
 🤔 How It Works: The Three-Model Approach
 The final index is a comprehensive blend of three different academic models, each calculated across three distinct time horizons (Short, Mid, and Long) to capture sentiment at different scales.
 
 Prospect Theory (CPT): This model, based on Nobel Prize-winning work, evaluates how traders perceive gains and losses. It assumes that the pain of a loss is felt more strongly than the pleasure of an equal gain, modeling the market's asymmetric emotional response.
 Herding (Brock–Durlauf): This component measures the "follow the crowd" instinct. It analyzes the synchronization of positive and negative returns to determine if traders are acting in a coordinated, "herd-like" manner, which is a classic sign of building fear or greed.
 Social Impact Theory (SIT): This model assesses how social forces influence market participants. 
 
 It combines three factors:
 
 
 Strength (S): The magnitude of recent price moves (volatility).
 Immediacy (I): How recently the most significant price action occurred.
 Number (N): The level of market participation (volume).
 
The indicator calculates all three models for a Short, Mid, and Long lookback period. It then aggregates these nine components (3 models x 3 timeframes) using customizable weights to produce a single, final Fear–Greed Index value.
 Interpretar How to Read the Index
 
 
 Main Line: This is the final FGI score.
 Lime/Green: Indicates Greed (positive values).
 Red: Indicates Fear (negative values).
 Fading Color: The color becomes more transparent as the index approaches the '0' (Neutral) line, and more solid as it moves toward the extremes.
 
 Key Zones:
 
 
 +100 to +30 (Extreme Greed): The market is highly euphoric and potentially overbought. This can be a contrarian signal for caution or profit-taking.
 +30 to +18 (Greed Zone): Strong bullish sentiment.
 +18 to -18 (Neutral Zone): The market is undecided, or fear and greed are in balance.
 -18 to -30 (Fear Zone): Strong bearish sentiment.
 -30 to -100 (Extreme Fear): The market is in a state of panic and may be oversold. This can be a contrarian signal for potential buying opportunities.
 
 Reference Plots: The indicator also plots the aggregated scores for each of the three models (Herding, Prospect, and SIT) as faint, secondary lines. This allows you to see which component is driving the overall sentiment. 
 
 ⚙️ Settings & Customization
This indicator is highly tunable, allowing you to adjust its sensitivity and component makeup.
 
 Time Windows:
 
 
 Short window: Lookback period for short-term sentiment.
 Mid window: Lookback for medium-term sentiment.
 Long window: Lookback for long-term sentiment.
 
 Model Aggregation Weights:
 
 
 Weight CPT, Weight Herding, Weight SIT: Control how much each of the three behavioral models contributes to the final score (they should sum to 1.0).
 Cross-Horizon Weights:
 Weight Short, Weight Mid, Weight Long: Control the influence of each timeframe on the final score (they should also sum to 1.0).
 
LibVPrfLibrary   "LibVPrf" 
This library provides an object-oriented framework for volume
profile analysis in Pine Script®. It is built around the `VProf`
User-Defined Type (UDT), which encapsulates all data, settings,
and statistical metrics for a single profile, enabling stateful
analysis with on-demand calculations.
Key Features:
1.  **Object-Oriented Design (UDT):** The library is built around
the `VProf` UDT. This object encapsulates all profile data
and provides methods for its full lifecycle management,
including creation, cloning, clearing, and merging of profiles.
2.  **Volume Allocation (`AllotMode`):** Offers two methods for
allocating a bar's volume:
- **Classic:** Assigns the entire bar's volume to the close
price bucket.
- **PDF:** Distributes volume across the bar's range using a
statistical price distribution model from the `LibBrSt` library.
3.  **Buy/Sell Volume Splitting (`SplitMode`):** Provides methods
for classifying volume into buying and selling pressure:
- **Classic:** Classifies volume based on the bar's color (Close vs. Open).
- **Dynamic:** A specific model that analyzes candle structure
(body vs. wicks) and a short-term trend factor to
estimate the buy/sell share at each price level.
4.  **Statistical Analysis (On-Demand):** Offers a suite of
statistical metrics calculated using a "Lazy Evaluation"
pattern (computed only when requested via `get...` methods):
- **Central Tendency:** Point of Control (POC), VWAP, and Median.
- **Dispersion:** Value Area (VA) and Population Standard Deviation.
- **Shape:** Skewness and Excess Kurtosis.
- **Delta:** Cumulative Volume Delta, including its
historical high/low watermarks.
5.  **Structural Analysis:** Includes a parameter-free method
(`getSegments`) to decompose a profile into its fundamental
unimodal segments, allowing for modality detection (e.g.,
identifying bimodal profiles).
6.  **Dynamic Profile Management:**
- **Auto-Fitting:** Profiles set to `dynamic = true` will
automatically expand their price range to fit new data.
- **Manipulation:** The resolution, price range, and Value Area
of a dynamic profile can be changed at any time. This
triggers a resampling process that uses a **linear
interpolation model** to re-bucket existing volume.
- **Assumption:** Non-dynamic profiles are fixed and will throw
a `runtime.error` if `addBar` is called with data
outside their initial range.
7.  **Bucket-Level Access:** Provides getter methods for direct
iteration and analysis of the raw buy/sell volume and price
boundaries of each individual price bucket.
---
**DISCLAIMER**
This library is provided "AS IS" and for informational and
educational purposes only. It does not constitute financial,
investment, or trading advice.
The author assumes no liability for any errors, inaccuracies,
or omissions in the code. Using this library to build
trading indicators or strategies is entirely at your own risk.
As a developer using this library, you are solely responsible
for the rigorous testing, validation, and performance of any
scripts you create based on these functions. The author shall
not be held liable for any financial losses incurred directly
or indirectly from the use of this library or any scripts
derived from it.
 create(buckets, rangeUp, rangeLo, dynamic, valueArea, allot, estimator, cdfSteps, split, trendLen) 
  Construct a new `VProf` object with fixed bucket count & range.
  Parameters:
     buckets (int) : series int        number of price buckets ≥ 1
     rangeUp (float) : series float      upper price bound (absolute)
     rangeLo (float) : series float      lower price bound (absolute)
     dynamic (bool) : series bool       Flag for dynamic adaption of profile ranges
     valueArea (int) : series int        Percentage of total volume to include in the Value Area (1..100)
     allot (series AllotMode) : series AllotMode  Allocation mode `classic` or `pdf`  (default `classic`)
     estimator (series PriceEst enum from AustrianTradingMachine/LibBrSt/1) : series LibBrSt.PriceEst PDF model when `model == PDF`. (deflault = 'uniform')
     cdfSteps (int) : series int        even #sub-intervals for Simpson rule (default 20)
     split (series SplitMode) : series SplitMode  Buy/Sell determination (default `classic`)
     trendLen (int) : series int        Look‑back bars for trend factor (default 3)
  Returns: VProf             freshly initialised profile
 method clone(self) 
  Create a deep copy of the volume profile.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf  Profile object to copy
  Returns: VProf  A new, independent copy of the profile
 method clear(self) 
  Reset all bucket tallies while keeping configuration intact.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf  profile object
  Returns: VProf  cleared profile (chaining)
 method merge(self, srcABuy, srcASell, srcRangeUp, srcRangeLo, srcCvd, srcCvdHi, srcCvdLo) 
  Merges volume data from a source profile into the current profile.
If resizing is needed, it performs a high-fidelity re-bucketing of existing
volume using a linear interpolation model inferred from neighboring buckets,
preventing aliasing artifacts and ensuring accurate volume preservation.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf         The target profile object to merge into.
     srcABuy (array) : array  The source profile's buy volume bucket array.
     srcASell (array) : array  The source profile's sell volume bucket array.
     srcRangeUp (float) : series float  The upper price bound of the source profile.
     srcRangeLo (float) : series float  The lower price bound of the source profile.
     srcCvd (float) : series float  The final Cumulative Volume Delta (CVD) value of the source profile.
     srcCvdHi (float) : series float  The historical high-water mark of the CVD from the source profile.
     srcCvdLo (float) : series float  The historical low-water mark of the CVD from the source profile.
  Returns: VProf         `self` (chaining), now containing the merged data.
 method addBar(self, offset) 
  Add current bar’s volume to the profile (call once per realtime bar).
classic mode: allocates all volume to the close bucket and classifies
by `close >= open`. PDF mode: distributes volume across buckets by the
estimator’s CDF mass. For `split = dynamic`, the buy/sell share per
price is computed via context-driven piecewise s(u).
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf       Profile object
     offset (int) : series int  To offset the calculated bar
  Returns: VProf       `self` (method chaining)
 method setBuckets(self, buckets) 
  Sets the number of buckets for the volume profile.
Behavior depends on the `isDynamic` flag.
- If `dynamic = true`: Works on filled profiles by re-bucketing to a new resolution.
- If `dynamic = false`: Only works on empty profiles to prevent accidental changes.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf       Profile object
     buckets (int) : series int  The new number of buckets
  Returns: VProf       `self` (chaining)
 method setRanges(self, rangeUp, rangeLo) 
  Sets the price range for the volume profile.
Behavior depends on the `dynamic` flag.
- If `dynamic = true`: Works on filled profiles by re-bucketing existing volume.
- If `dynamic = false`: Only works on empty profiles to prevent accidental changes.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf         Profile object
     rangeUp (float) : series float  The new upper price bound
     rangeLo (float) : series float  The new lower price bound
  Returns: VProf         `self` (chaining)
 method setValueArea(self, valueArea) 
  Set the percentage of volume for the Value Area. If the value
changes, the profile is finalized again.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf       Profile object
     valueArea (int) : series int  The new Value Area percentage (0..100)
  Returns: VProf       `self` (chaining)
 method getBktBuyVol(self, idx) 
  Get Buy volume of a bucket.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf         Profile object
     idx (int) : series int    Bucket index
  Returns: series float  Buy volume ≥ 0
 method getBktSellVol(self, idx) 
  Get Sell volume of a bucket.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf         Profile object
     idx (int) : series int    Bucket index
  Returns: series float  Sell volume ≥ 0
 method getBktBnds(self, idx) 
  Get Bounds of a bucket.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf       Profile object
     idx (int) : series int  Bucket index
  Returns:  
up  series float  The upper price bound of the bucket.
lo  series float  The lower price bound of the bucket.
 method getPoc(self) 
  Get POC information.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf  Profile object
  Returns:  
pocIndex  series int    The index of the Point of Control (POC) bucket.
pocPrice. series float  The mid-price of the Point of Control (POC) bucket.
 method getVA(self) 
  Get Value Area (VA) information.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf  Profile object
  Returns:  
vaUpIndex  series int    The index of the upper bound bucket of the Value Area.
vaUpPrice  series float  The upper price bound of the Value Area.
vaLoIndex  series int    The index of the lower bound bucket of the Value Area.
vaLoPrice  series float  The lower price bound of the Value Area.
 method getMedian(self) 
  Get the profile's median price and its bucket index. Calculates the value on-demand if stale.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf         Profile object.
  Returns:     
medianIndex  series int    The index of the bucket containing the Median.
medianPrice  series float  The Median price of the profile.
 method getVwap(self) 
  Get the profile's VWAP and its bucket index. Calculates the value on-demand if stale.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf         Profile object.
  Returns:     
vwapIndex    series int    The index of the bucket containing the VWAP.
vwapPrice    series float  The Volume Weighted Average Price of the profile.
 method getStdDev(self) 
  Get the profile's volume-weighted standard deviation. Calculates the value on-demand if stale.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf         Profile object.
  Returns: series float  The Standard deviation of the profile.
 method getSkewness(self) 
  Get the profile's skewness. Calculates the value on-demand if stale.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf         Profile object.
  Returns: series float  The Skewness of the profile.
 method getKurtosis(self) 
  Get the profile's excess kurtosis. Calculates the value on-demand if stale.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf         Profile object.
  Returns: series float  The Kurtosis of the profile.
 method getSegments(self) 
  Get the profile's fundamental unimodal segments. Calculates on-demand if stale.
Uses a parameter-free, pivot-based recursive algorithm.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf        The profile object.
  Returns: matrix  A 2-column matrix where each row is an   pair.
 method getCvd(self) 
  Cumulative Volume Delta (CVD) like metric over all buckets.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf      Profile object.
  Returns:  
cvd    series float  The final Cumulative Volume Delta (Total Buy Vol - Total Sell Vol).
cvdHi  series float  The running high-water mark of the CVD as volume was added.
cvdLo  series float  The running low-water mark of the CVD as volume was added.
 VProf 
  VProf  Bucketed Buy/Sell volume profile plus meta information.
  Fields:
     buckets (series int) : int              Number of price buckets (granularity ≥1)
     rangeUp (series float) : float            Upper price range (absolute)
     rangeLo (series float) : float            Lower price range (absolute)
     dynamic (series bool) : bool             Flag for dynamic adaption of profile ranges
     valueArea (series int) : int              Percentage of total volume to include in the Value Area (1..100)
     allot (series AllotMode) : AllotMode        Allocation mode `classic` or `pdf`
     estimator (series PriceEst enum from AustrianTradingMachine/LibBrSt/1) : LibBrSt.PriceEst Price density model when  `model == PDF`
     cdfSteps (series int) : int              Simpson integration resolution (even ≥2)
     split (series SplitMode) : SplitMode        Buy/Sell split strategy per bar
     trendLen (series int) : int              Look‑back length for trend factor (≥1)
     maxBkt (series int) : int              User-defined number of buckets (unclamped)
     aBuy (array) : array     Buy volume per bucket
     aSell (array) : array     Sell volume per bucket
     cvd (series float) : float            Final Cumulative Volume Delta (Total Buy Vol - Total Sell Vol).
     cvdHi (series float) : float            Running high-water mark of the CVD as volume was added.
     cvdLo (series float) : float            Running low-water mark of the CVD as volume was added.
     poc (series int) : int              Index of max‑volume bucket (POC). Is `na` until calculated.
     vaUp (series int) : int              Index of upper Value‑Area bound. Is `na` until calculated.
     vaLo (series int) : int              Index of lower value‑Area bound. Is `na` until calculated.
     median (series float) : float            Median price of the volume distribution. Is `na` until calculated.
     vwap (series float) : float            Profile VWAP (Volume Weighted Average Price). Is `na` until calculated.
     stdDev (series float) : float            Standard Deviation of volume around the VWAP. Is `na` until calculated.
     skewness (series float) : float            Skewness of the volume distribution. Is `na` until calculated.
     kurtosis (series float) : float            Excess Kurtosis of the volume distribution. Is `na` until calculated.
     segments (matrix) : matrix      A 2-column matrix where each row is an   pair. Is `na` until calculated.
Money Volume • Buyers vs Sellers — @tgambinoxThis indicator estimates the total amount of money traded (Volume × Price)
and splits it between buyers and sellers based on each candle’s behavior.
It displays green bars for buyers and orange bars for sellers, allowing you to see
which side of the market is concentrating the capital.
Useful for detecting flow imbalances, buying/selling pressure,
and confirming price moves alongside total monetary volume (blue line).
LibBrStLibrary   "LibBrSt" 
This is a library for quantitative analysis, designed to estimate
the statistical properties of price movements *within* a single
OHLC bar, without requiring access to tick data. It provides a
suite of estimators based on various statistical and econometric
models, allowing for analysis of intra-bar volatility and
price distribution.
Key Capabilities:
1.  **Price Distribution Models (`PriceEst`):** Provides a selection
of estimators that model intra-bar price action as a probability
distribution over the   range. This allows for the
calculation of the intra-bar mean (`priceMean`) and standard
deviation (`priceStdDev`) in absolute price units. Models include:
- **Symmetric Models:** `uniform`, `triangular`, `arcsine`,
`betaSym`, and `t4Sym` (Student-t with fat tails).
- **Skewed Models:** `betaSkew` and `t4Skew`, which adjust
their shape based on the Open/Close position.
- **Model Assumptions:** The skewed models rely on specific
internal constants. `betaSkew` uses a fixed concentration
parameter (`BETA_SKEW_CONCENTRATION = 4.0`), and `t4Sym`/`t4Skew`
use a heuristic scaling factor (`T4_SHAPE_FACTOR`)
to map the distribution.
2.  **Econometric Log-Return Estimators (`LogEst`):** Includes a set of
econometric estimators for calculating the volatility (`logStdDev`)
and drift (`logMean`) of logarithmic returns within a single bar.
These are unit-less measures. Models include:
- **Parkinson (1980):** A High-Low range estimator.
- **Garman-Klass (1980):** An OHLC-based estimator.
- **Rogers-Satchell (1991):** An OHLC estimator that accounts
for non-zero drift.
3.  **Distribution Analysis (PDF/CDF):** Provides functions to work
with the Probability Density Function (`pricePdf`) and
Cumulative Distribution Function (`priceCdf`) of the
chosen price model.
- **Note on `priceCdf`:** This function uses analytical (exact)
calculations for the `uniform`, `triangular`, and `arcsine`
models. For all other models (e.g., `betaSkew`, `t4Skew`),
it uses **numerical integration (Simpson's rule)** as
an approximation of the cumulative probability.
4.  **Mathematical Functions:** The library's Beta distribution
models (`betaSym`, `betaSkew`) are supported by an internal
implementation of the natural log-gamma function, which is
based on the Lanczos approximation.
---
**DISCLAIMER**
This library is provided "AS IS" and for informational and
educational purposes only. It does not constitute financial,
investment, or trading advice.
The author assumes no liability for any errors, inaccuracies,
or omissions in the code. Using this library to build
trading indicators or strategies is entirely at your own risk.
As a developer using this library, you are solely responsible
for the rigorous testing, validation, and performance of any
scripts you create based on these functions. The author shall
not be held liable for any financial losses incurred directly
or indirectly from the use of this library or any scripts
derived from it.
 priceStdDev(estimator, offset) 
  Estimates **σ̂** (standard deviation) *in price units* for the current
bar, according to the chosen `PriceEst` distribution assumption.
  Parameters:
     estimator (series PriceEst) : series PriceEst  Distribution assumption (see enum).
     offset (int) : series int       To offset the calculated bar
  Returns: series float     σ̂ ≥ 0 ; `na` if undefined (e.g. zero range).
 priceMean(estimator, offset) 
  Estimates **μ̂** (mean price) for the chosen `PriceEst` within the
current bar.
  Parameters:
     estimator (series PriceEst) : series PriceEst Distribution assumption (see enum).
     offset (int) : series int    To offset the calculated bar
  Returns: series float  μ̂ in price units.
 pricePdf(estimator, price, offset) 
  Probability-density under the chosen `PriceEst` model.
**Returns 0** when `p` is outside the current bar’s  .
  Parameters:
     estimator (series PriceEst) : series PriceEst  Distribution assumption (see enum).
     price (float) : series float  Price level to evaluate.
     offset (int) : series int    To offset the calculated bar
  Returns: series float  Density value.
 priceCdf(estimator, upper, lower, steps, offset) 
  Cumulative probability **between** `upper` and `lower` under
the chosen `PriceEst` model. Outside-bar regions contribute zero.
Uses a fast, analytical calculation for Uniform, Triangular, and
Arcsine distributions, and defaults to numerical integration
(Simpson's rule) for more complex models.
  Parameters:
     estimator (series PriceEst) : series PriceEst Distribution assumption (see enum).
     upper (float) : series float  Upper Integration Boundary.
     lower (float) : series float  Lower Integration Boundary.
     steps (int) : series int    # of sub-intervals for numerical integration (if used).
     offset (int) : series int    To offset the calculated bar.
  Returns: series float  Probability mass ∈  .
 logStdDev(estimator, offset) 
  Estimates **σ̂** (standard deviation) of *log-returns* for the current bar.
  Parameters:
     estimator (series LogEst) : series LogEst  Distribution assumption (see enum).
     offset (int) : series int     To offset the calculated bar
  Returns: series float   σ̂ (unit-less); `na` if undefined.
 logMean(estimator, offset) 
  Estimates μ̂ (mean log-return / drift) for the chosen `LogEst`.
The returned value is consistent with the assumptions of the
selected volatility estimator.
  Parameters:
     estimator (series LogEst) : series LogEst  Distribution assumption (see enum).
     offset (int) : series int     To offset the calculated bar
  Returns: series float   μ̂ (unit-less log-return).
Scientific Correlation Testing FrameworkScientific Correlation Testing Framework - Comprehensive Guide
Introduction to Correlation Analysis
What is Correlation?
Correlation is a statistical measure that describes the degree to which two assets move in relation to each other. Think of it like measuring how closely two dancers move together on a dance floor.
Perfect Positive Correlation (+1.0): Both dancers move in perfect sync, same direction, same speed
Perfect Negative Correlation (-1.0): Both dancers move in perfect sync but in opposite directions
Zero Correlation (0): The dancers move completely independently of each other
In financial markets, correlation helps us understand relationships between different assets, which is crucial for:
Portfolio diversification
Risk management
Pairs trading strategies
Hedging positions
Market analysis
Why This Script is Special
This script goes beyond simple correlation calculations by providing:
Two different correlation methods (Pearson and Spearman)
Statistical significance testing to ensure results are meaningful
Rolling correlation analysis to track how relationships change over time
Visual representation for easy interpretation
Comprehensive statistics table with detailed metrics
Deep Dive into the Script's Components
1. Input Parameters Explained-
Symbol Selection:
This allows you to select the second asset to compare with the chart's primary asset
Default is Apple (NASDAQ:AAPL), but you can change this to any symbol
Example: If you're viewing a Bitcoin chart, you might set this to "NASDAQ:TSLA" to see if Bitcoin and Tesla are correlated
Correlation Window (60): This is the number of periods used to calculate the main correlation
Larger values (e.g., 100-500) provide more stable, long-term correlation measures
Smaller values (e.g., 10-50) are more responsive to recent price movements
60 is a good balance for most daily charts (about 3 months of trading days)
Rolling Correlation Window (20): A shorter window to detect recent changes in correlation
This helps identify when the relationship between assets is strengthening or weakening
Default of 20 is roughly one month of trading days
Return Type: This determines how price changes are calculated
Simple Returns: (Today's Price - Yesterday's Price) / Yesterday's Price
Easy to understand: "The asset went up 2% today"
Log Returns: Natural logarithm of (Today's Price / Yesterday's Price)
More mathematically elegant for statistical analysis
Better for time-additive properties (returns over multiple periods)
Less sensitive to extreme values.
Confidence Level (95%): This determines how certain we want to be about our results
95% confidence means we accept a 5% chance of being wrong (false positive)
Higher confidence (e.g., 99%) makes the test more strict
Lower confidence (e.g., 90%) makes the test more lenient
95% is the standard in most scientific research
Show Statistical Significance: When enabled, the script will test if the correlation is statistically significant or just due to random chance.
Display options control what you see on the chart:
Show Pearson/Spearman/Rolling Correlation: Toggle each correlation type on/off
Show Scatter Plot: Displays a scatter plot of returns (limited to recent points to avoid performance issues)
Show Statistical Tests: Enables the detailed statistics table
Table Text Size: Adjusts the size of text in the statistics table
2.Functions explained-
calcReturns():
This function calculates price returns based on your selected method:
Log Returns:
Formula: ln(Price_t / Price_t-1)
Example: If a stock goes from $100 to $101, the log return is ln(101/100) = ln(1.01) ≈ 0.00995 or 0.995%
Benefits: More symmetric, time-additive, and better for statistical modeling
Simple Returns:
Formula: (Price_t - Price_t-1) / Price_t-1
Example: If a stock goes from $100 to $101, the simple return is (101-100)/100 = 0.01 or 1%
Benefits: More intuitive and easier to understand
rankArray():
This function calculates the rank of each value in an array, which is used for Spearman correlation:
How ranking works:
The smallest value gets rank 1
The second smallest gets rank 2, and so on
For ties (equal values), they get the average of their ranks
Example: For values  
Sorted:  
Ranks:   (the two 2s tie for ranks 1 and 2, so they both get 1.5)
Why this matters: Spearman correlation uses ranks instead of actual values, making it less sensitive to outliers and non-linear relationships.
pearsonCorr():
This function calculates the Pearson correlation coefficient:
Mathematical Formula:
r = (nΣxy - ΣxΣy) / √ 
Where x and y are the two variables, and n is the sample size
What it measures:
The strength and direction of the linear relationship between two variables
Values range from -1 (perfect negative linear relationship) to +1 (perfect positive linear relationship)
0 indicates no linear relationship
Example:
If two stocks have a Pearson correlation of 0.8, they have a strong positive linear relationship
When one stock goes up, the other tends to go up in a fairly consistent proportion
spearmanCorr():
This function calculates the Spearman rank correlation:
How it works:
Convert each value in both datasets to its rank
Calculate the Pearson correlation on the ranks instead of the original values
What it measures:
The strength and direction of the monotonic relationship between two variables
A monotonic relationship is one where as one variable increases, the other either consistently increases or decreases
It doesn't require the relationship to be linear
When to use it instead of Pearson:
When the relationship is monotonic but not linear
When there are significant outliers in the data
When the data is ordinal (ranked) rather than interval/ratio
Example:
If two stocks have a Spearman correlation of 0.7, they have a strong positive monotonic relationship
When one stock goes up, the other tends to go up, but not necessarily in a straight-line relationship
tStatistic():
This function calculates the t-statistic for correlation:
Mathematical Formula: t = r × √((n-2)/(1-r²))
Where r is the correlation coefficient and n is the sample size
What it measures:
How many standard errors the correlation is away from zero
Used to test the null hypothesis that the true correlation is zero
Interpretation:
Larger absolute t-values indicate stronger evidence against the null hypothesis
Generally, a t-value greater than 2 (in absolute terms) is considered statistically significant at the 95% confidence level
criticalT() and pValue():
These functions provide approximations for statistical significance testing:
criticalT():
Returns the critical t-value for a given degrees of freedom (df) and significance level
The critical value is the threshold that the t-statistic must exceed to be considered statistically significant
Uses approximations since Pine Script doesn't have built-in statistical distribution functions
pValue():
Estimates the p-value for a given t-statistic and degrees of freedom
The p-value is the probability of observing a correlation as strong as the one calculated, assuming the true correlation is zero
Smaller p-values indicate stronger evidence against the null hypothesis
Standard interpretation:
p < 0.01: Very strong evidence (marked with **)
p < 0.05: Strong evidence (marked with *)
p ≥ 0.05: Weak evidence, not statistically significant
stdev():
This function calculates the standard deviation of a dataset:
Mathematical Formula: σ = √(Σ(x-μ)²/(n-1))
Where x is each value, μ is the mean, and n is the sample size
What it measures:
The amount of variation or dispersion in a set of values
A low standard deviation indicates that the values tend to be close to the mean
A high standard deviation indicates that the values are spread out over a wider range
Why it matters for correlation:
Standard deviation is used in calculating the correlation coefficient
It also provides information about the volatility of each asset's returns
Comparing standard deviations helps understand the relative riskiness of the two assets.
3.Getting Price Data-
price1: The closing price of the primary asset (the chart you're viewing)
price2: The closing price of the secondary asset (the one you selected in the input parameters)
Returns are used instead of raw prices because:
Returns are typically stationary (mean and variance stay constant over time)
Returns normalize for price levels, allowing comparison between assets of different values
Returns represent what investors actually care about: percentage changes in value
4.Information Table-
Creates a table to display statistics
Only shows on the last bar to avoid performance issues
Positioned in the top right of the chart
Has 2 columns and 15 rows
Populating the Table
The script then populates the table with various statistics:
Header Row: "Metric" and "Value"
Sample Information: Sample size and return type
Pearson Correlation: Value, t-statistic, p-value, and significance
Spearman Correlation: Value, t-statistic, p-value, and significance
Rolling Correlation: Current value
Standard Deviations: For both assets
Interpretation: Text description of the correlation strength
The table uses color coding to highlight important information:
Green for significant positive results
Red for significant negative results
Yellow for borderline significance
Color-coded headers for each section
=> Practical Applications and Interpretation
How to Interpret the Results
Correlation Strength
0.0 to 0.3 (or 0.0 to -0.3): Weak or no correlation
The assets move mostly independently of each other
Good for diversification purposes
0.3 to 0.7 (or -0.3 to -0.7): Moderate correlation
The assets show some tendency to move together (or in opposite directions)
May be useful for certain trading strategies but not extremely reliable
0.7 to 1.0 (or -0.7 to -1.0): Strong correlation
The assets show a strong tendency to move together (or in opposite directions)
Can be useful for pairs trading, hedging, or as a market indicator
Statistical Significance
p < 0.01: Very strong evidence that the correlation is real
Marked with ** in the table
Very unlikely to be due to random chance
p < 0.05: Strong evidence that the correlation is real
Marked with * in the table
Unlikely to be due to random chance
p ≥ 0.05: Weak evidence that the correlation is real
Not marked in the table
Could easily be due to random chance
Rolling Correlation
The rolling correlation shows how the relationship between assets changes over time
If the rolling correlation is much different from the long-term correlation, it suggests the relationship is changing
This can indicate:
A shift in market regime
Changing fundamentals of one or both assets
Temporary market dislocations that might present trading opportunities
Trading Applications
1. Portfolio Diversification
Goal: Reduce overall portfolio risk by combining assets that don't move together
Strategy: Look for assets with low or negative correlations
Example: If you hold tech stocks, you might add some utilities or bonds that have low correlation with tech
2. Pairs Trading
Goal: Profit from the relative price movements of two correlated assets
Strategy:
Find two assets with strong historical correlation
When their prices diverge (one goes up while the other goes down)
Buy the underperforming asset and short the outperforming asset
Close the positions when they converge back to their normal relationship
Example: If Coca-Cola and Pepsi are highly correlated but Coca-Cola drops while Pepsi rises, you might buy Coca-Cola and short Pepsi
3. Hedging
Goal: Reduce risk by taking an offsetting position in a negatively correlated asset
Strategy: Find assets that tend to move in opposite directions
Example: If you hold a portfolio of stocks, you might buy some gold or government bonds that tend to rise when stocks fall
4. Market Analysis
Goal: Understand market dynamics and interrelationships
Strategy: Analyze correlations between different sectors or asset classes
Example:
If tech stocks and semiconductor stocks are highly correlated, movements in one might predict movements in the other
If the correlation between stocks and bonds changes, it might signal a shift in market expectations
5. Risk Management
Goal: Understand and manage portfolio risk
Strategy: Monitor correlations to identify when diversification benefits might be breaking down
Example: During market crises, many assets that normally have low correlations can become highly correlated (correlation convergence), reducing diversification benefits
Advanced Interpretation and Caveats
Correlation vs. Causation
Important Note: Correlation does not imply causation
Example: Ice cream sales and drowning incidents are correlated (both increase in summer), but one doesn't cause the other
Implication: Just because two assets move together doesn't mean one causes the other to move
Solution: Look for fundamental economic reasons why assets might be correlated
Non-Stationary Correlations
Problem: Correlations between assets can change over time
Causes:
Changing market conditions
Shifts in monetary policy
Structural changes in the economy
Changes in the underlying businesses
Solution: Use rolling correlations to monitor how relationships change over time
Outliers and Extreme Events
Problem: Extreme market events can distort correlation measurements
Example: During a market crash, many assets may move in the same direction regardless of their normal relationship
Solution:
Use Spearman correlation, which is less sensitive to outliers
Be cautious when interpreting correlations during extreme market conditions
Sample Size Considerations
Problem: Small sample sizes can produce unreliable correlation estimates
Rule of Thumb: Use at least 30 data points for a rough estimate, 60+ for more reliable results
Solution:
Use the default correlation length of 60 or higher
Be skeptical of correlations calculated with small samples
Timeframe Considerations
Problem: Correlations can vary across different timeframes
Example: Two assets might be positively correlated on a daily basis but negatively correlated on a weekly basis
Solution:
Test correlations on multiple timeframes
Use the timeframe that matches your trading horizon
Look-Ahead Bias
Problem: Using information that wouldn't have been available at the time of trading
Example: Calculating correlation using future data
Solution: This script avoids look-ahead bias by using only historical data
Best Practices for Using This Script
1. Appropriate Parameter Selection
Correlation Window:
For short-term trading: 20-50 periods
For medium-term analysis: 50-100 periods
For long-term analysis: 100-500 periods
Rolling Window:
Should be shorter than the main correlation window
Typically 1/3 to 1/2 of the main window
Return Type:
For most applications: Log Returns (better statistical properties)
For simplicity: Simple Returns (easier to interpret)
2. Validation and Testing
Out-of-Sample Testing:
Calculate correlations on one time period
Test if they hold in a different time period
Multiple Timeframes:
Check if correlations are consistent across different timeframes
Economic Rationale:
Ensure there's a logical reason why assets should be correlated
3. Monitoring and Maintenance
Regular Review:
Correlations can change, so review them regularly
Alerts:
Set up alerts for significant correlation changes
Documentation:
Keep notes on why certain assets are correlated and what might change that relationship
4. Integration with Other Analysis
Fundamental Analysis:
Combine correlation analysis with fundamental factors
Technical Analysis:
Use correlation analysis alongside technical indicators
Market Context:
Consider how market conditions might affect correlations
Conclusion
This Scientific Correlation Testing Framework provides a comprehensive tool for analyzing relationships between financial assets. By offering both Pearson and Spearman correlation methods, statistical significance testing, and rolling correlation analysis, it goes beyond simple correlation measures to provide deeper insights.
For beginners, this script might seem complex, but it's built on fundamental statistical concepts that become clearer with use. Start with the default settings and focus on interpreting the main correlation lines and the statistics table. As you become more comfortable, you can adjust the parameters and explore more advanced applications.
Remember that correlation analysis is just one tool in a trader's toolkit. It should be used in conjunction with other forms of analysis and with a clear understanding of its limitations. When used properly, it can provide valuable insights for portfolio construction, risk management, and pair trading strategy development.
LibPvotLibrary   "LibPvot" 
This is a library for advanced technical analysis, specializing
in two core areas: the detection of price-oscillator
divergences and the analysis of market structure. It provides
a back-end engine for signal detection and a toolkit for
indicator plotting.
Key Features:
1.  **Complete Divergence Suite (Class A, B, C):** The engine detects
all three major types of divergences, providing a full spectrum of
analytical signals:
- **Regular (A):** For potential trend reversals.
- **Hidden (B):** For potential trend continuations.
- **Exaggerated (C):** For identifying weakness at double tops/bottoms.
2.  **Advanced Signal Filtering:** The detection logic uses a
percentage-based price tolerance (`prcTol`). This feature
enables the practical detection of Exaggerated divergences
(which rarely occur at the exact same price) and creates a
"dead zone" to filter insignificant noise from triggering
Regular divergences.
3.  **Pivot Synchronization:** A bar tolerance (`barTol`) is used
to reliably match price and oscillator pivots that do not
align perfectly on the same bar, preventing missed signals.
4.  **Signal Invalidation Logic:** Features two built-in invalidation
rules:
- An optional `invalidate` parameter automatically terminates
active divergences if the price or the oscillator breaks
the level of the confirming pivot.
- The engine also discards 'half-pivots' (e.g., a price pivot)
if a corresponding oscillator pivot does not appear within
the `barTol` window.
5.  **Stateful Plotting Helpers:** Provides helper functions
(`bullDivPos` and `bearDivPos`) that abstract away the
state management issues of visualizing persistent signals.
They generate gap-free, accurately anchored data series
ready to be used in `plotshape` functions, simplifying
indicator-side code.
6.  **Rich Data Output:** The core detection functions (`bullDiv`, `bearDiv`)
return a comprehensive 9-field data tuple. This includes the
boolean flags for each divergence type and the precise
coordinates (price, oscillator value, bar index) of both the
starting and the confirming pivots.
7.  **Market Structure & Trend Analysis:** Includes a
`marketStructure` function to automatically identify pivot
highs/lows, classify their relationship (HH, LH, LL, HL),
detect structure breaks, and determine the current trend
state (Up, Down, Neutral) based on pivot sequences.
---
**DISCLAIMER**
This library is provided "AS IS" and for informational and
educational purposes only. It does not constitute financial,
investment, or trading advice.
The author assumes no liability for any errors, inaccuracies,
or omissions in the code. Using this library to build
trading indicators or strategies is entirely at your own risk.
As a developer using this library, you are solely responsible
for the rigorous testing, validation, and performance of any
scripts you create based on these functions. The author shall
not be held liable for any financial losses incurred directly
or indirectly from the use of this library or any scripts
derived from it.
 bullDiv(priceSrc, oscSrc, leftLen, rightLen, depth, barTol, prcTol, persist, invalidate) 
  Detects bullish divergences (Regular, Hidden, Exaggerated) based on pivot lows.
  Parameters:
     priceSrc (float) : series float  Price series to check for pivots (e.g., `low`).
     oscSrc (float) : series float  Oscillator series to check for pivots.
     leftLen (int) : series int    Number of bars to the left of a pivot (default 5).
     rightLen (int) : series int    Number of bars to the right of a pivot (default 5).
     depth (int) : series int    Maximum number of stored pivot pairs to check against (default 2).
     barTol (int) : series int    Maximum bar distance allowed between the price pivot and the oscillator pivot (default 3).
     prcTol (float) : series float  The percentage tolerance for comparing pivot prices. Used to detect Exaggerated
divergences and filter out market noise (default 0.05%).
     persist (bool) : series bool   If `true` (default), the divergence flag stays active for the entire duration of the signal.
If `false`, it returns a single-bar pulse on detection.
     invalidate (bool) : series bool   If `true` (default), terminates an active divergence if price or oscillator break
below the confirming pivot low.
  Returns:   A tuple containing comprehensive data for a detected bullish divergence.
regBull       series bool   `true` if a Regular bullish divergence (Class A) is active.
hidBull       series bool   `true` if a Hidden bullish divergence (Class B) is active.
exgBull       series bool   `true` if an Exaggerated bullish divergence (Class C) is active.
initPivotPrc  series float  Price value of the initial (older) pivot low.
initPivotOsz  series float  Oscillator value of the initial pivot low.
initPivotBar  series int    Bar index of the initial pivot low.
lastPivotPrc  series float  Price value of the last (confirming) pivot low.
lastPivotOsz  series float  Oscillator value of the last pivot low.
lastPivotBar  series int    Bar index of the last pivot low.
 bearDiv(priceSrc, oscSrc, leftLen, rightLen, depth, barTol, prcTol, persist, invalidate) 
  Detects bearish divergences (Regular, Hidden, Exaggerated) based on pivot highs.
  Parameters:
     priceSrc (float) : series float  Price series to check for pivots (e.g., `high`).
     oscSrc (float) : series float  Oscillator series to check for pivots.
     leftLen (int) : series int    Number of bars to the left of a pivot (default 5).
     rightLen (int) : series int    Number of bars to the right of a pivot (default 5).
     depth (int) : series int    Maximum number of stored pivot pairs to check against (default 2).
     barTol (int) : series int    Maximum bar distance allowed between the price pivot and the oscillator pivot (default 3).
     prcTol (float) : series float  The percentage tolerance for comparing pivot prices. Used to detect Exaggerated
divergences and filter out market noise (default 0.05%).
     persist (bool) : series bool   If `true` (default), the divergence flag stays active for the entire duration of the signal.
If `false`, it returns a single-bar pulse on detection.
     invalidate (bool) : series bool   If `true` (default), terminates an active divergence if price or oscillator break
above the confirming pivot high.
  Returns:   A tuple containing comprehensive data for a detected bearish divergence.
regBear       series bool   `true` if a Regular bearish divergence (Class A) is active.
hidBear       series bool   `true` if a Hidden bearish divergence (Class B) is active.
exgBear       series bool   `true` if an Exaggerated bearish divergence (Class C) is active.
initPivotPrc  series float  Price value of the initial (older) pivot high.
initPivotOsz  series float  Oscillator value of the initial pivot high.
initPivotBar  series int    Bar index of the initial pivot high.
lastPivotPrc  series float  Price value of the last (confirming) pivot high.
lastPivotOsz  series float  Oscillator value of the last pivot high.
lastPivotBar  series int    Bar index of the last pivot high.
 bullDivPos(regBull, hidBull, exgBull, rightLen, yPos) 
  Calculates the plottable data series for bullish divergences. It manages
the complex state of a persistent signal's plotting window to ensure
gap-free and accurately anchored visualization.
  Parameters:
     regBull (bool) : series bool   The regular bullish divergence flag from `bullDiv`.
     hidBull (bool) : series bool   The hidden bullish divergence flag from `bullDiv`.
     exgBull (bool) : series bool   The exaggerated bullish divergence flag from `bullDiv`.
     rightLen (int) : series int    The same `rightLen` value used in `bullDiv` for correct timing.
     yPos (float) : series float  The series providing the base Y-coordinate for the shapes (e.g., `low`).
  Returns:   A tuple of three `series float` for plotting bullish divergences.
regBullPosY  series float  Contains the static anchor Y-value for Regular divergences where a shape should be plotted; `na` otherwise.
hidBullPosY  series float  Contains the static anchor Y-value for Hidden divergences where a shape should be plotted; `na` otherwise.
exgBullPosY  series float  Contains the static anchor Y-value for Exaggerated divergences where a shape should be plotted; `na` otherwise.
 bearDivPos(regBear, hidBear, exgBear, rightLen, yPos) 
  Calculates the plottable data series for bearish divergences. It manages
the complex state of a persistent signal's plotting window to ensure
gap-free and accurately anchored visualization.
  Parameters:
     regBear (bool) : series bool   The regular bearish divergence flag from `bearDiv`.
     hidBear (bool) : series bool   The hidden bearish divergence flag from `bearDiv`.
     exgBear (bool) : series bool   The exaggerated bearish divergence flag from `bearDiv`.
     rightLen (int) : series int    The same `rightLen` value used in `bearDiv` for correct timing.
     yPos (float) : series float  The series providing the base Y-coordinate for the shapes (e.g., `high`).
  Returns:   A tuple of three `series float` for plotting bearish divergences.
regBearPosY  series float  Contains the static anchor Y-value for Regular divergences where a shape should be plotted; `na` otherwise.
hidBearPosY  series float  Contains the static anchor Y-value for Hidden divergences where a shape should be plotted; `na` otherwise.
exgBearPosY  series float  Contains the static anchor Y-value for Exaggerated divergences where a shape should be plotted; `na` otherwise.
 marketStructure(highSrc, lowSrc, leftLen, rightLen, srcTol) 
  Analyzes the market structure by identifying pivot points, classifying
their sequence (e.g., Higher Highs, Lower Lows), and determining the
prevailing trend state.
  Parameters:
     highSrc (float) : series float  Price series for pivot high detection (e.g., `high`).
     lowSrc (float) : series float  Price series for pivot low detection (e.g., `low`).
     leftLen (int) : series int    Number of bars to the left of a pivot (default 5).
     rightLen (int) : series int    Number of bars to the right of a pivot (default 5).
     srcTol (float) : series float  Percentage tolerance to consider two pivots as 'equal' (default 0.05%).
  Returns:   A tuple containing detailed market structure information.
pivType     series PivType  The type of the most recently formed pivot (e.g., `hh`, `ll`).
lastPivHi   series float    The price level of the last confirmed pivot high.
lastPivLo   series float    The price level of the last confirmed pivot low.
lastPiv     series float    The price level of the last confirmed pivot (either high or low).
pivHiBroken series bool     `true` if the price has broken above the last pivot high.
pivLoBroken series bool     `true` if the price has broken below the last pivot low.
trendState  series TrendState The current trend state (`up`, `down`, or `neutral`).
Multi-Day SMAmade this script due to the frustration of not having the 5 day SMA added with the 10 20 and 50. I need the 5 SMA for my type of trading to determine when to sell with stocks showing exponential growth. 
so heres this: Multi SMA 
5 day SMA pink 
10 day SMA white
20 day SMA blue
50 day SMA red
200 day SMA green
LibTmFrLibrary   "LibTmFr" 
This is a utility library for handling timeframes and
multi-timeframe (MTF) analysis in Pine Script. It provides a
collection of functions designed to handle common tasks related
to period detection, session alignment, timeframe construction,
and time calculations, forming a foundation for
MTF indicators.
Key Capabilities:
1.  **MTF Period Engine:** The library includes functions for
managing higher-timeframe (HTF) periods.
- **Period Detection (`isNewPeriod`):** Detects the first bar
of a given timeframe. It includes custom logic to handle
multi-month and multi-year intervals where
`timeframe.change()` may not be sufficient.
- **Bar Counting (`sinceNewPeriod`):** Counts the number of
bars that have passed in the current HTF period or
returns the final count for a completed historical period.
2.  **Automatic Timeframe Selection:** Offers functions for building
a top-down analysis framework:
- **Automatic HTF (`autoHTF`):** Suggests a higher timeframe
(HTF) for broader context based on the current timeframe.
- **Automatic LTF (`autoLTF`):** Suggests an appropriate lower
timeframe (LTF) for granular intra-bar analysis.
3.  **Timeframe Manipulation and Comparison:** Includes tools for
working with timeframe strings:
- **Build & Split (`buildTF`, `splitTF`):** Functions to
programmatically construct valid Pine Script timeframe
strings (e.g., "4H") and parse them back into their
numeric and unit components.
- **Comparison (`isHigherTF`, `isActiveTF`, `isLowerTF`):**
A set of functions to check if a given timeframe is
higher, lower, or the same as the script's active timeframe.
- **Multiple Validation (`isMultipleTF`):** Checks if a
higher timeframe is a practical multiple of the current
timeframe. This is based on the assumption that checking
if recent, completed HTF periods contained more than one
bar is a valid proxy for preventing data gaps.
4.  **Timestamp Interpolation:** Contains an `interpTimestamp()`
function that calculates an absolute timestamp by
interpolating at a given percentage across a specified
range of bars (e.g., 50% of the way through the last
20 bars), enabling time calculations at a resolution
finer than the chart's native bars.
---
**DISCLAIMER**
This library is provided "AS IS" and for informational and
educational purposes only. It does not constitute financial,
investment, or trading advice.
The author assumes no liability for any errors, inaccuracies,
or omissions in the code. Using this library to build
trading indicators or strategies is entirely at your own risk.
As a developer using this library, you are solely responsible
for the rigorous testing, validation, and performance of any
scripts you create based on these functions. The author shall
not be held liable for any financial losses incurred directly
or indirectly from the use of this library or any scripts
derived from it.
 buildTF(quantity, unit) 
  Builds a Pine Script timeframe string from a numeric quantity and a unit enum.
The resulting string can be used with `request.security()` or `input.timeframe`.
  Parameters:
     quantity (int) : series int     Number to specifie how many `unit` the timeframe spans.
     unit (series TFUnit) : series TFUnit  The size category for the bars.
  Returns: series string  A Pine-style timeframe identifier, e.g.
"5S"   → 5-seconds bars
"30"   → 30-minute bars
"120"  → 2-hour bars
"1D"   → daily bars
"3M"   → 3-month bars
"24M"  → 2-year bars
 splitTF(tf) 
  Splits a Pine‑timeframe identifier into numeric quantity and unit (TFUnit).
  Parameters:
     tf (string) : series string   Timeframe string, e.g.
"5S", "30", "120", "1D", "3M", "24M".
  Returns:  
quantity   series int     The numeric value of the timeframe (e.g., 15 for "15", 3 for "3M").
unit       series TFUnit  The unit of the timeframe (e.g., TFUnit.minutes, TFUnit.months).
Notes on strings without a suffix:
• Pure digits are minutes; if divisible by 60, they are treated as hours.
• An "M" suffix is months; if divisible by 12, it is converted to years.
 autoHTF(tf) 
  Picks an appropriate **higher timeframe (HTF)** relative to the selected timeframe.
It steps up along a coarse ladder to produce sensible jumps for top‑down analysis.
Mapping → chosen HTF:
≤  1 min  →  60  (1h)          ≈ ×60
≤  3 min  → 180  (3h)          ≈ ×60
≤  5 min  → 240  (4h)          ≈ ×48
≤ 15 min  →  D   (1 day)       ≈ ×26–×32   (regular session 6.5–8 h)
> 15 min  →  W   (1 week)      ≈ ×64–×80 for 30m; varies with input
≤  1 h    →  W   (1 week)      ≈ ×32–×40
≤  4 h    →  M   (1 month)     ≈ ×36–×44   (~22 trading days / month)
>  4 h    →  3M  (3 months)    ≈ ×36–×66   (e.g., 12h→×36–×44; 8h→×53–×66)
≤  1 day  →  3M  (3 months)    ≈ ×60–×66   (~20–22 trading days / month)
>  1 day  → 12M  (1 year)      ≈ ×(252–264)/quantity
≤  1 week → 12M  (1 year)      ≈ ×52
>  1 week → 48M  (4 years)     ≈ ×(208)/quantity
=  1 M    → 48M  (4 years)     ≈ ×48
>  1 M    → error ("HTF too big")
any       → error ("HTF too big")
Notes:
• Inputs in months or years are restricted: only 1M is allowed; larger months/any years throw.
• Returns a Pine timeframe string usable in `request.security()` and `input.timeframe`.
  Parameters:
     tf (string) : series string   Selected timeframe (e.g., "D", "240", or `timeframe.period`).
  Returns: series string   Suggested higher timeframe.
 autoLTF(tf) 
  Selects an appropriate **lower timeframe LTF)** for intra‑bar evaluation
based on the selected timeframe. The goal is to keep intra‑bar
loops performant while providing enough granularity.
Mapping → chosen LTF:
≤  1 min  →  1S      ≈ ×60
≤  5 min  →  5S      ≈ ×60
≤ 15 min  → 15S      ≈ ×60
≤ 30 min  → 30S      ≈ ×60
> 30 min  → 60S (1m) ≈ ×31–×59   (for 31–59 minute charts)
≤  1 h    →  1  (1m) ≈ ×60
≤  2 h    →  2  (2m) ≈ ×60
≤  4 h    →  5  (5m) ≈ ×48
>  4 h    → 15 (15m) ≈ ×24–×48   (e.g., 6h→×24, 8h→×32, 12h→×48)
≤  1 day  → 15 (15m) ≈ ×26–×32   (regular sessions ~6.5–8h)
>  1 day  → 60 (60m) ≈ ×(26–32)  per day × quantity
≤  1 week → 60 (60m) ≈ ×32–×40   (≈5 sessions of ~6.5–8h)
>  1 week → 240 (4h) ≈ ×(8–10)   per week × quantity
≤  1 M    → 240 (4h) ≈ ×33–×44   (~20–22 sessions × 6.5–8h / 4h)
≤  3 M    →  D  (1d) ≈ ×(20–22)  per month × quantity
>  3 M    →  W  (1w) ≈ ×(4–5)    per month × quantity
≤  1 Y    →  W  (1w) ≈ ×52
>  1 Y    →  M  (1M) ≈ ×12       per year × quantity
Notes:
• Ratios for D/W/M are given as ranges because they depend on
**regular session length** (typically ~6.5–8h, not 24h).
• Returned strings can be used with `request.security()` and `input.timeframe`.
  Parameters:
     tf (string) : series string   Selected timeframe (e.g., "D", "240", or timeframe.period).
  Returns: series string   Suggested lower TF to use for intra‑bar work.
 isNewPeriod(tf, offset) 
  Returns `true` when a new session-aligned period begins, or on the Nth bar of that period.
  Parameters:
     tf (string) : series string  Target higher timeframe (e.g., "D", "W", "M").
     offset (simple int) : simple int     0 → checks for the first bar of the new period.
1+ → checks for the N-th bar of the period.
  Returns: series bool    `true` if the condition is met.
 sinceNewPeriod(tf, offset) 
  Counts how many bars have passed within a higher timeframe (HTF) period.
For daily, weekly, and monthly resolutions, the period is aligned with the trading session.
  Parameters:
     tf (string) : series string  Target parent timeframe (e.g., "60", "D").
     offset (simple int) : simple int     0  → Running count for the current period.
1+ → Finalized count for the Nth most recent *completed* period.
  Returns: series int     Number of bars.
 isHigherTF(tf, main) 
  Returns `true` when the selected timeframe represents a
higher resolution than the active timeframe.
  Parameters:
     tf (string) : series string  Selected timeframe.
     main (bool) : series bool    When `true`, the comparison is made against the chart's main timeframe
instead of the script's active timeframe. Optional. Defaults to `false`.
  Returns: series bool    `true` if `tf` > active TF; otherwise `false`.
 isActiveTF(tf, main) 
  Returns `true` when the selected timeframe represents the
exact resolution of the active timeframe.
  Parameters:
     tf (string) : series string  Selected timeframe.
     main (bool) : series bool    When `true`, the comparison is made against the chart's main timeframe
instead of the script's active timeframe. Optional. Defaults to `false`.
  Returns: series bool    `true` if `tf` == active TF; otherwise `false`.
 isLowerTF(tf, main) 
  Returns `true` when the selected timeframe represents a
lower resolution than the active timeframe.
  Parameters:
     tf (string) : series string  Selected timeframe.
     main (bool) : series bool    When `true`, the comparison is made against the chart's main timeframe
instead of the script's active timeframe. Optional. Defaults to `false`.
  Returns: series bool    `true` if `tf` < active TF; otherwise `false`.
 isMultipleTF(tf) 
  Returns `true` if the selected timeframe (`tf`) is a practical multiple
of the active skript's timeframe. It verifies this by checking if `tf` is a higher timeframe
that has consistently contained more than one bar of the skript's timeframe in recent periods.
The period detection is session-aware.
  Parameters:
     tf (string) : series string  The higher timeframe to check.
  Returns: series bool    `true` if `tf` is a practical multiple; otherwise `false`.
 interpTimestamp(offStart, offEnd, pct) 
  Calculates a precise absolute timestamp by interpolating within a bar range based on a percentage.
This version works with RELATIVE bar offsets from the current bar.
  Parameters:
     offStart (int) : series int    The relative offset of the starting bar (e.g., 10 for 10 bars ago).
     offEnd (int) : series int    The relative offset of the ending bar (e.g., 1 for 1 bar ago). Must be <= offStart.
     pct (float) : series float  The percentage of the bar range to measure (e.g., 50.5 for 50.5%).
Values are clamped to the   range.
  Returns: series int    The calculated, interpolated absolute Unix timestamp in milliseconds.
LibVolmLibrary   "LibVolm" 
This library provides a collection of core functions for volume and
money flow analysis. It offers implementations of several classic
volume-based indicators, with a focus on flexibility
for applications like multi-timeframe and session-based analysis.
Key Features:
1.  **Suite of Classic Volume Indicators:** Includes standard
implementations of several foundational indicators:
- **On Balance Volume (`obv`):** A momentum indicator that
accumulates volume based on price direction.
- **Accumulation/Distribution Line (`adLine`):** Measures cumulative
money flow using the close's position within the bar's range.
- **Chaikin Money Flow (`cmf`):** An oscillator version of the ADL
that measures money flow over a specified lookback period.
2.  **Anchored/Resettable Indicators:** The library includes flexible,
resettable indicators ideal for cyclical analysis:
- **Anchored VWAP (`vwap`):** Calculates a Volume Weighted Average
Price that can be reset on any user-defined `reset` condition.
It returns both the VWAP and the number of bars (`prdBars`) in
the current period.
- **Resettable CVD (`cvd`):** Computes a Cumulative Volume Delta
that can be reset on a custom `reset` anchor. The function
also tracks and returns the highest (`hi`) and lowest (`lo`)
delta values reached within the current period.
(Note: The delta sign is determined by a specific logic:
it first checks close vs. open, then close vs. prior
close, and persists the last non-zero sign).
3.  **Volume Sanitization:** All functions that use the built-in
`volume` variable automatically sanitize it via an internal
function. This process replaces `na` values with 0 and ensures
no negative volume values are used, providing stable calculations.
---
**DISCLAIMER**
This library is provided "AS IS" and for informational and
educational purposes only. It does not constitute financial,
investment, or trading advice.
The author assumes no liability for any errors, inaccuracies,
or omissions in the code. Using this library to build
trading indicators or strategies is entirely at your own risk.
As a developer using this library, you are solely responsible
for the rigorous testing, validation, and performance of any
scripts you create based on these functions. The author shall
not be held liable for any financial losses incurred directly
or indirectly from the use of this library or any scripts
derived from it.
 obv(price) 
  Calculates the On Balance Volume (OBV) cumulative indicator.
  Parameters:
     price (float) : series float  Source price series, typically the close.
  Returns: series float  Cumulative OBV value.
 adLine() 
  Computes the Accumulation/Distribution Line (AD Line).
  Returns: series float  Cumulative AD Line value.
 cmf(length) 
  Computes Chaikin Money Flow (CMF).
  Parameters:
     length (int) : series int    Lookback length for the CMF calculation.
  Returns: series float  CMF value.
 vwap(price, reset) 
  Calculates an anchored Volume Weighted Average Price (VWAP).
  Parameters:
     price (float) : series float   Source price series (usually *close*).
     reset (bool) : series bool    A signal that is *true* on the bar where the
accumulation should be reset.
  Returns:  
vwap     series float  The calculated Volume Weighted Average Price for the current period.
prdBars  series int    The number of bars that have passed since the last reset.
 cvd(reset) 
  Calculates a resettable, cumulative Volume Delta (CVD).
It accumulates volume delta and tracks its high/low range. The
accumulation is reset to zero whenever the `reset` condition is true.
This is useful for session-based analysis, intra-bar calculations,
or any other custom-anchored accumulation.
  Parameters:
     reset (bool) : series bool   A signal that is *true* on the bar where the
accumulation should be reset.
  Returns:  
cum  series float  The current cumulative volume delta.
hi   series float  The highest peak the cumulative delta has reached in the current period.
lo   series float  The lowest trough the cumulative delta has reached in the current period.
LibMvAvLibrary   "LibMvAv" 
This library provides a unified interface for calculating a
wide variety of moving averages. It is designed to simplify
indicator development by consolidating numerous MA calculations
into a single function and integrating the weighting
capabilities from the `LibWght` library.
Key Features:
1.  **All-in-One MA Function:** The core of the library is the
`ma()` function. Users can select the desired calculation
method via the `MAType` enum, which helps create
cleaner and more maintainable code compared to using
many different `ta.*` or custom functions.
2.  **Comprehensive Selection of MA Types:** It provides a
selection of 12 different moving averages, covering
common Pine Script built-ins and their weighted counterparts:
- **Standard MAs:** SMA, EMA, WMA, RMA (Wilder's), HMA (Hull), and
LSMA (Least Squares / Linear Regression).
- **Weighted MAs:** Weight-enhanced versions of the above
(WSMA, WEMA, WWMA, WRMA, WHMA, WLSMA).
3.  **Integrated Weighting:** The library provides weighted versions
for each of its standard MA types (e.g., `wsma` alongside `sma`).
By acting as a dispatcher, the `ma()` function allows these
weighted calculations to be called using the optional
`weight` parameter, which are then processed by the `LibWght`
library.
4.  **Simple API:** The library internally handles the logic of
choosing the correct function based on the selected `MAType`.
The user only needs to provide the source, length, and
optional weight, simplifying the development process.
---
**DISCLAIMER**
This library is provided "AS IS" and for informational and
educational purposes only. It does not constitute financial,
investment, or trading advice.
The author assumes no liability for any errors, inaccuracies,
or omissions in the code. Using this library to build
trading indicators or strategies is entirely at your own risk.
As a developer using this library, you are solely responsible
for the rigorous testing, validation, and performance of any
scripts you create based on these functions. The author shall
not be held liable for any financial losses incurred directly
or indirectly from the use of this library or any scripts
derived from it.
 ma(maType, source, length, weight) 
  Returns the requested moving average.
  Parameters:
     maType (simple MAType) : simple MAType Desired type (see enum above).
     source (float) : series float  Data series to smooth.
     length (simple int) : simple int    Look-back / period length.
     weight (float) : series float  Weight series (default = na)
  Returns: series float  Moving-average value.
LibWghtLibrary   "LibWght" 
This is a library of mathematical and statistical functions
designed for quantitative analysis in Pine Script. Its core
principle is the integration of a custom weighting series
(e.g., volume) into a wide array of standard technical
analysis calculations.
Key Capabilities:
1.  **Universal Weighting:** All exported functions accept a `weight`
parameter. This allows standard calculations (like moving
averages, RSI, and standard deviation) to be influenced by an
external data series, such as volume or tick count.
2.  **Weighted Averages and Indicators:** Includes a comprehensive
collection of weighted functions:
- **Moving Averages:** `wSma`, `wEma`, `wWma`, `wRma` (Wilder's),
`wHma` (Hull), and `wLSma` (Least Squares / Linear Regression).
- **Oscillators & Ranges:** `wRsi`, `wAtr` (Average True Range),
`wTr` (True Range), and `wR` (High-Low Range).
3.  **Volatility Decomposition:** Provides functions to decompose
total variance into distinct components for market analysis.
- **Two-Way Decomposition (`wTotVar`):** Separates variance into
**between-bar** (directional) and **within-bar** (noise)
components.
- **Three-Way Decomposition (`wLRTotVar`):** Decomposes variance
relative to a linear regression into **Trend** (explained by
the LR slope), **Residual** (mean-reversion around the
LR line), and **Within-Bar** (noise) components.
- **Local Volatility (`wLRLocTotStdDev`):** Measures the total
"noise" (within-bar + residual) around the trend line.
4.  **Weighted Statistics and Regression:** Provides a robust
function for Weighted Linear Regression (`wLinReg`) and a
full suite of related statistical measures:
- **Between-Bar Stats:** `wBtwVar`, `wBtwStdDev`, `wBtwStdErr`.
- **Residual Stats:** `wResVar`, `wResStdDev`, `wResStdErr`.
5.  **Fallback Mechanism:** All functions are designed for reliability.
If the total weight over the lookback period is zero (e.g., in
a no-volume period), the algorithms automatically fall back to
their unweighted, uniform-weight equivalents (e.g., `wSma`
becomes a standard `ta.sma`), preventing errors and ensuring
continuous calculation.
---
**DISCLAIMER**
This library is provided "AS IS" and for informational and
educational purposes only. It does not constitute financial,
investment, or trading advice.
The author assumes no liability for any errors, inaccuracies,
or omissions in the code. Using this library to build
trading indicators or strategies is entirely at your own risk.
As a developer using this library, you are solely responsible
for the rigorous testing, validation, and performance of any
scripts you create based on these functions. The author shall
not be held liable for any financial losses incurred directly
or indirectly from the use of this library or any scripts
derived from it.
 wSma(source, weight, length) 
  Weighted Simple Moving Average (linear kernel).
  Parameters:
     source (float) : series float  Data to average.
     weight (float) : series float  Weight series.
     length (int) : series int    Look-back length ≥ 1.
  Returns: series float  Linear-kernel weighted mean; falls back to
the arithmetic mean if Σweight = 0.
 wEma(source, weight, length) 
  Weighted EMA (exponential kernel).
  Parameters:
     source (float) : series float  Data to average.
     weight (float) : series float  Weight series.
     length (simple int) : simple int    Look-back length ≥ 1.
  Returns: series float  Exponential-kernel weighted mean; falls
back to classic EMA if Σweight = 0.
 wWma(source, weight, length) 
  Weighted WMA (linear kernel).
  Parameters:
     source (float) : series float  Data to average.
     weight (float) : series float  Weight series.
     length (int) : series int    Look-back length ≥ 1.
  Returns: series float  Linear-kernel weighted mean; falls back to
classic WMA if Σweight = 0.
 wRma(source, weight, length) 
  Weighted RMA (Wilder kernel, α = 1/len).
  Parameters:
     source (float) : series float  Data to average.
     weight (float) : series float  Weight series.
     length (simple int) : simple int    Look-back length ≥ 1.
  Returns: series float  Wilder-kernel weighted mean; falls back to
classic RMA if Σweight = 0.
 wHma(source, weight, length) 
  Weighted HMA (linear kernel).
  Parameters:
     source (float) : series float  Data to average.
     weight (float) : series float  Weight series.
     length (int) : series int    Look-back length ≥ 1.
  Returns: series float  Linear-kernel weighted mean; falls back to
classic HMA if Σweight = 0.
 wRsi(source, weight, length) 
  Weighted Relative Strength Index.
  Parameters:
     source (float) : series float  Price series.
     weight (float) : series float  Weight series.
     length (simple int) : simple int    Look-back length ≥ 1.
  Returns: series float  Weighted RSI; uniform if Σw = 0.
 wAtr(tr, weight, length) 
  Weighted ATR (Average True Range).
Implemented as WRMA on *true range*.
  Parameters:
     tr (float) : series float  True Range series.
     weight (float) : series float  Weight series.
     length (simple int) : simple int    Look-back length ≥ 1.
  Returns: series float  Weighted ATR; uniform weights if Σw = 0.
 wTr(tr, weight, length) 
  Weighted True Range over a window.
  Parameters:
     tr (float) : series float  True Range series.
     weight (float) : series float  Weight series.
     length (int) : series int    Look-back length ≥ 1.
  Returns: series float  Weighted mean of TR; uniform if Σw = 0.
 wR(r, weight, length) 
  Weighted High-Low Range over a window.
  Parameters:
     r (float) : series float  High-Low per bar.
     weight (float) : series float  Weight series.
     length (int) : series int    Look-back length ≥ 1.
  Returns: series float  Weighted mean of range; uniform if Σw = 0.
 wBtwVar(source, weight, length, biased) 
  Weighted Between Variance (biased/unbiased).
  Parameters:
     source (float) : series float  Data series.
     weight (float) : series float  Weight series.
     length (int) : series int    Look-back length ≥ 2.
     biased (bool) : series bool   true → population (biased); false → sample.
  Returns:  
variance  series float  The calculated between-bar variance (σ²btw), either biased or unbiased.
sumW      series float  The sum of weights over the lookback period (Σw).
sumW2     series float  The sum of squared weights over the lookback period (Σw²).
 wBtwStdDev(source, weight, length, biased) 
  Weighted Between Standard Deviation.
  Parameters:
     source (float) : series float  Data series.
     weight (float) : series float  Weight series.
     length (int) : series int    Look-back length ≥ 2.
     biased (bool) : series bool   true → population (biased); false → sample.
  Returns: series float  σbtw uniform if Σw = 0.
 wBtwStdErr(source, weight, length, biased) 
  Weighted Between Standard Error.
  Parameters:
     source (float) : series float  Data series.
     weight (float) : series float  Weight series.
     length (int) : series int    Look-back length ≥ 2.
     biased (bool) : series bool   true → population (biased); false → sample.
  Returns: series float  √(σ²btw / N_eff) uniform if Σw = 0.
 wTotVar(mu, sigma, weight, length, biased) 
  Weighted Total Variance (= between-group + within-group).
Useful when each bar represents an aggregate with its own
mean* and pre-estimated σ (e.g., second-level ranges inside a
1-minute bar). Assumes the *weight* series applies to both the
group means and their σ estimates.
  Parameters:
     mu (float) : series float  Group means (e.g., HL2 of 1-second bars).
     sigma (float) : series float  Pre-estimated σ of each group (same basis).
     weight (float) : series float  Weight series (volume, ticks, …).
     length (int) : series int    Look-back length ≥ 2.
     biased (bool) : series bool   true → population (biased); false → sample.
  Returns:  
varBtw  series float  The between-bar variance component (σ²btw).
varWtn  series float  The within-bar variance component (σ²wtn).
sumW    series float  The sum of weights over the lookback period (Σw).
sumW2   series float  The sum of squared weights over the lookback period (Σw²).
 wTotStdDev(mu, sigma, weight, length, biased) 
  Weighted Total Standard Deviation.
  Parameters:
     mu (float) : series float  Group means (e.g., HL2 of 1-second bars).
     sigma (float) : series float  Pre-estimated σ of each group (same basis).
     weight (float) : series float  Weight series (volume, ticks, …).
     length (int) : series int    Look-back length ≥ 2.
     biased (bool) : series bool   true → population (biased); false → sample.
  Returns: series float  σtot.
 wTotStdErr(mu, sigma, weight, length, biased) 
  Weighted Total Standard Error.
SE = √( total variance / N_eff ) with the same effective sample
size logic as `wster()`.
  Parameters:
     mu (float) : series float  Group means (e.g., HL2 of 1-second bars).
     sigma (float) : series float  Pre-estimated σ of each group (same basis).
     weight (float) : series float  Weight series (volume, ticks, …).
     length (int) : series int    Look-back length ≥ 2.
     biased (bool) : series bool   true → population (biased); false → sample.
  Returns: series float  √(σ²tot / N_eff).
 wLinReg(source, weight, length) 
  Weighted Linear Regression.
  Parameters:
     source (float) : series float   Data series.
     weight (float) : series float   Weight series.
     length (int) : series int     Look-back length ≥ 2.
  Returns:  
mid        series float  The estimated value of the regression line at the most recent bar.
slope      series float  The slope of the regression line.
intercept  series float  The intercept of the regression line.
 wResVar(source, weight, midLine, slope, length, biased) 
  Weighted Residual Variance.
linear regression – optionally biased (population) or
unbiased (sample).
  Parameters:
     source (float) : series float   Data series.
     weight (float) : series float   Weighting series (volume, etc.).
     midLine (float) : series float   Regression value at the last bar.
     slope (float) : series float   Slope per bar.
     length (int) : series int     Look-back length ≥ 2.
     biased (bool) : series bool    true  → population variance (σ²_P), denominator ≈ N_eff.
false → sample variance (σ²_S), denominator ≈ N_eff - 2.
(Adjusts for 2 degrees of freedom lost to the regression).
  Returns:  
variance  series float  The calculated residual variance (σ²res), either biased or unbiased.
sumW      series float  The sum of weights over the lookback period (Σw).
sumW2     series float  The sum of squared weights over the lookback period (Σw²).
 wResStdDev(source, weight, midLine, slope, length, biased) 
  Weighted Residual Standard Deviation.
  Parameters:
     source (float) : series float  Data series.
     weight (float) : series float  Weight series.
     midLine (float) : series float  Regression value at the last bar.
     slope (float) : series float  Slope per bar.
     length (int) : series int    Look-back length ≥ 2.
     biased (bool) : series bool   true → population (biased); false → sample.
  Returns: series float  σres; uniform if Σw = 0.
 wResStdErr(source, weight, midLine, slope, length, biased) 
  Weighted Residual Standard Error.
  Parameters:
     source (float) : series float  Data series.
     weight (float) : series float  Weight series.
     midLine (float) : series float  Regression value at the last bar.
     slope (float) : series float  Slope per bar.
     length (int) : series int    Look-back length ≥ 2.
     biased (bool) : series bool   true → population (biased); false → sample.
  Returns: series float  √(σ²res / N_eff); uniform if Σw = 0.
 wLRTotVar(mu, sigma, weight, midLine, slope, length, biased) 
  Weighted Linear-Regression Total Variance **around the
window’s weighted mean μ**.
σ²_tot =  E_w    ⟶  *within-group variance*
+ Var_w   ⟶  *residual variance*
+ Var_w   ⟶  *trend variance*
where each bar i in the look-back window contributes
m_i   = *mean*      (e.g. 1-sec HL2)
σ_i   = *sigma*     (pre-estimated intrabar σ)
w_i   = *weight*    (volume, ticks, …)
ŷ_i   = b₀ + b₁·x   (value of the weighted LR line)
r_i   = m_i − ŷ_i   (orthogonal residual)
  Parameters:
     mu (float) : series float  Per-bar mean m_i.
     sigma (float) : series float  Pre-estimated σ_i of each bar.
     weight (float) : series float  Weight series w_i (≥ 0).
     midLine (float) : series float  Regression value at the latest bar (ŷₙ₋₁).
     slope (float) : series float  Slope b₁ of the regression line.
     length (int) : series int    Look-back length ≥ 2.
     biased (bool) : series bool   true → population; false → sample.
  Returns:  
varRes  series float  The residual variance component (σ²res).
varWtn  series float  The within-bar variance component (σ²wtn).
varTrd  series float  The trend variance component (σ²trd), explained by the linear regression.
sumW    series float  The sum of weights over the lookback period (Σw).
sumW2   series float  The sum of squared weights over the lookback period (Σw²).
 wLRTotStdDev(mu, sigma, weight, midLine, slope, length, biased) 
  Weighted Linear-Regression Total Standard Deviation.
  Parameters:
     mu (float) : series float  Per-bar mean m_i.
     sigma (float) : series float  Pre-estimated σ_i of each bar.
     weight (float) : series float  Weight series w_i (≥ 0).
     midLine (float) : series float  Regression value at the latest bar (ŷₙ₋₁).
     slope (float) : series float  Slope b₁ of the regression line.
     length (int) : series int    Look-back length ≥ 2.
     biased (bool) : series bool   true → population; false → sample.
  Returns: series float  √(σ²tot).
 wLRTotStdErr(mu, sigma, weight, midLine, slope, length, biased) 
  Weighted Linear-Regression Total Standard Error.
SE = √( σ²_tot / N_eff )  with N_eff = Σw² / Σw²  (like in wster()).
  Parameters:
     mu (float) : series float  Per-bar mean m_i.
     sigma (float) : series float  Pre-estimated σ_i of each bar.
     weight (float) : series float  Weight series w_i (≥ 0).
     midLine (float) : series float  Regression value at the latest bar (ŷₙ₋₁).
     slope (float) : series float  Slope b₁ of the regression line.
     length (int) : series int    Look-back length ≥ 2.
     biased (bool) : series bool   true → population; false → sample.
  Returns: series float  √((σ²res, σ²wtn, σ²trd) / N_eff).
 wLRLocTotStdDev(mu, sigma, weight, midLine, slope, length, biased) 
  Weighted Linear-Regression Local Total Standard Deviation.
Measures the total "noise" (within-bar + residual) around the trend.
  Parameters:
     mu (float) : series float  Per-bar mean m_i.
     sigma (float) : series float  Pre-estimated σ_i of each bar.
     weight (float) : series float  Weight series w_i (≥ 0).
     midLine (float) : series float  Regression value at the latest bar (ŷₙ₋₁).
     slope (float) : series float  Slope b₁ of the regression line.
     length (int) : series int    Look-back length ≥ 2.
     biased (bool) : series bool   true → population; false → sample.
  Returns: series float  √(σ²wtn + σ²res).
 wLRLocTotStdErr(mu, sigma, weight, midLine, slope, length, biased) 
  Weighted Linear-Regression Local Total Standard Error.
  Parameters:
     mu (float) : series float  Per-bar mean m_i.
     sigma (float) : series float  Pre-estimated σ_i of each bar.
     weight (float) : series float  Weight series w_i (≥ 0).
     midLine (float) : series float  Regression value at the latest bar (ŷₙ₋₁).
     slope (float) : series float  Slope b₁ of the regression line.
     length (int) : series int    Look-back length ≥ 2.
     biased (bool) : series bool   true → population; false → sample.
  Returns: series float  √((σ²wtn + σ²res) / N_eff).
 wLSma(source, weight, length) 
  Weighted Least Square Moving Average.
  Parameters:
     source (float) : series float   Data series.
     weight (float) : series float   Weight series.
     length (int) : series int     Look-back length ≥ 2.
  Returns: series float   Least square weighted mean. Falls back
to unweighted regression if Σw = 0.
5 SMA/EMA_ZigzagThis indicator combines five SMA/EMA/WMA lines with the “ZigZag with Fibonacci Levels” indicator by LonesomeTheBlue, designed to trade according to Thắng Đoàn SMT’s method.
EMA 21 34
Zigzag 3/5
Mandelbrot Fractal DimensionThe Mandelbrot Fractal Dimension (D) measures the  information density  and  path complexity  of price movements. It quantifies how much a price path fills the space between its starting and ending points:
 
   D ≈ 1.0 : Strong trending behavior (minimal complexity, high predictability)
   D ≈ 1.5 : Random walk behavior (maximum complexity, no structure)
   D > 1.5 : Mean-reverting behavior (high complexity, bounded movement)
 
Reference the given  link for documentation .
[Parth🇮🇳] Wall Street US30 Pro - Prop Firm Edition....Yo perfect! Here's the COMPLETE strategy in simple words:
***
## WALL STREET US30 TRADING STRATEGY - SIMPLE VERSION
### WHAT YOU'RE TRADING:
US30 (Dow Jones Index) on 1-hour chart using a professional indicator with smart money concepts.
---
### WHEN TO TRADE:
**6:30 PM - 10:00 PM IST every day** (London-NY overlap = highest volume)
***
### THE INDICATOR SHOWS YOU:
A table in top-right corner with 5 things:
1. **Signal Strength** - How confident (need 70%+)
2. **RSI** - Momentum (need OK status)
3. **MACD** - Trend direction (need UP for buys, DOWN for sells)
4. **Volume** - Real or fake move (need HIGH)
5. **Trend** - Overall direction (need UP for buys, DOWN for sells)
Plus **green arrows** (buy signals) and **red arrows** (sell signals).
---
### THE RULES:
**When GREEN ▲ arrow appears:**
- Wait for 1-hour candle to close (don't rush in)
- Check the table:
  - Signal Strength 70%+ ? ✅
  - Volume HIGH? ✅
  - RSI okay? ✅
  - MACD up? ✅
  - Trend up? ✅
- If all yes = ENTER LONG (BUY)
- Set stop loss 40-50 pips below entry
- Set take profit 2x the risk (2:1 ratio)
**When RED ▼ arrow appears:**
- Wait for 1-hour candle to close (don't rush in)
- Check the table:
  - Signal Strength 70%+ ? ✅
  - Volume HIGH? ✅
  - RSI okay? ✅
  - MACD down? ✅
  - Trend down? ✅
- If all yes = ENTER SHORT (SELL)
- Set stop loss 40-50 pips above entry
- Set take profit 2x the risk (2:1 ratio)
***
### REAL EXAMPLE:
**7:45 PM IST - Green arrow appears**
Table shows:
- Signal Strength: 88% 🔥
- RSI: 55 OK
- MACD: ▲ UP
- Volume: 1.8x HIGH
- Trend: 🟢 UP
All checks pass ✅
**8:00 PM - Candle closes, signal confirmed**
I check table again - still strong ✓
**I enter on prop firm:**
- BUY 0.1 lot
- Entry: 38,450
- Stop Loss: 38,400 (50 pips below)
- Take Profit: 38,550 (100 pips above)
- Risk: $50
- Reward: $100
- Ratio: 1:2 ✅
**9:30 PM - Price hits 38,550**
- Take profit triggered ✓
- +$100 profit
- Trade closes
**Done for that signal!**
***
### YOUR DAILY ROUTINE:
**6:30 PM IST** - Open TradingView + prop firm
**6:30 PM - 10 PM IST** - Watch for signals
**When signal fires** - Check table, enter if strong
**10:00 PM IST** - Close all trades, done
**Expected daily** - 1-3 signals, +$100-300 profit
***
### EXPECTED RESULTS:
**Win Rate:** 65-75% (most trades win)
**Signals per day:** 1-3
**Profit per trade:** $50-200
**Daily profit:** $100-300
**Monthly profit:** $2,000-6,000
**Monthly return:** 20-30% (on $10K account)
---
### WHAT MAKES THIS WORK:
✅ Uses 7+ professional filters (not just 1 indicator)
✅ Checks volume (real moves only)
✅ Filters overbought/oversold (avoids tops/bottoms)
✅ Aligns with 4-hour trend (higher timeframe)
✅ Only trades peak volume hours (6:30-10 PM IST)
✅ Uses support/resistance (institutional levels)
✅ Risk/reward 2:1 minimum (math works out)
***
### KEY DISCIPLINE RULES:
**DO:**
- ✅ Only trade 6:30-10 PM IST
- ✅ Wait for candle to close
- ✅ Check ALL 5 table items
- ✅ Only take 70%+ strength signals
- ✅ Always use stop loss
- ✅ Always 2:1 reward ratio
- ✅ Risk 1-2% per trade
- ✅ Close all trades by 10 PM
- ✅ Journal every trade
- ✅ Follow the plan
**DON'T:**
- ❌ Trade outside 6:30-10 PM IST
- ❌ Enter before candle closes
- ❌ Take weak signals (below 70%)
- ❌ Trade without stop loss
- ❌ Move stop loss (lock in loss)
- ❌ Hold overnight
- ❌ Revenge trade after losses
- ❌ Overleverge (more than 0.1 lot start)
- ❌ Skip journaling
- ❌ Deviate from plan
***
### THE 5-STEP ENTRY PROCESS:
**Step 1:** Arrow appears on chart ➜
**Step 2:** Wait for candle to close ➜
**Step 3:** Check table (all 5 items) ➜
**Step 4:** If all good = go to prop firm ➜
**Step 5:** Enter trade with SL & TP
Takes 30 seconds once you practice!
***
### MONEY MATH (Starting with $5,000):
**If you take 20 signals per month:**
- Win 15, Lose 5 (75% rate)
- Wins: 15 × $100 = $1,500
- Losses: 5 × $50 = -$250
- Net: +$1,250/month = 25% return
**Month 2:** $5,000 + $1,250 = $6,250 account
**Month 3:** $6,250 + $1,562 = $7,812 account
**Month 4:** $7,812 + $1,953 = $9,765 account
**Month 5:** $9,765 + $2,441 = $12,206 account
**Month 6:** $12,206 + $3,051 = $15,257 account
**In 6 months = $10,000 account → $15,000+ (50% growth)**
That's COMPOUNDING, baby! 💰
***
### START TODAY:
1. Copy indicator code
2. Add to 1-hour US30 chart on TradingView
3. Wait until 6:30 PM IST tonight (or tomorrow if late)
4. Watch for signals
5. Follow the rules
6. Trade your prop firm
**That's it! Simple as that!**
***
### FINAL WORDS:
This isn't get-rich-quick. This is build-wealth-steadily.
You follow the plan, take quality signals only, manage risk properly, you WILL make money. Not every trade wins, but the winners are bigger than losers (2:1 ratio).
Most traders fail because they:
- Trade too much (overtrading)
- Don't follow their plan (emotions)
- Risk too much per trade (blown account)
- Chase signals (FOMO)
- Don't journal (repeat mistakes)
You avoid those 5 things = you'll be ahead of 95% of traders.
**Start trading 6:30 PM IST. Let's go! 🚀**
EMA Bounce · CCI + MACD Filters - By author (PDK1977)3 EMA Bounce – Dual-Stack Edition by PDK1977
Script is inspired by this youtube strategy by Trading DNA 
www.youtube.com
A price-action tool that spots “kiss-and-rebound” moves off fast / mid / slow EMAs, with separate buy- and sell-stacks.
Signals are cleared through CCI and MACD filters for confidence, an optional slow-EMA trend filter, and a spacing rule to reduce noise.
Plots 3 or 6 color-coded EMAs directly on the chart (if buy and sell is equal only 3 lines) and paints compact BULL (lime) / BEAR (red) triangles at qualifying bars for buy and sell.
ADJUST EMA as explained in the video for YOUR choosen assets and learn to use EMA correct on each assets.
Disclaimer: this script is provided strictly for educational purposes; the author accepts no liability for any trading decisions made with it.
Have fun!
Best regard Patrick
BigBallsCalculate normalized volume based on StdDev of volume over 200 bars and show volume as a circle on candles. 
Sometimes useful for "follow through".
MA strategyBuy / sell on MA cross. Use ATR or Swing for stop
Option for moving stop after second SwL / SwH
Knock yourself out modifying.
MA 44 moving averages. 
There is nothing more to it, but I have to write this otherwise TV wont let me publish.
RSI Candle 12-Band SpectrumExperience RSI like never before. This multi-band visualizer transforms relative strength into a living color map — directly over price action — revealing momentum shifts long before traditional RSI signals.
🔹 12 Dynamic RSI Bands – A full emotional spectrum from oversold to overbought, colored from deep blue to burning red.
🔹 Adaptive Pulse System – Highlights every shift in RSI state with an intelligent fade-out pulse that measures the strength of each rotation.
🔹 Precision Legend Display – Clear RSI cutoff zones with user-defined thresholds and color ranges.
🔹 Multi-Timeframe Engine – Optionally view higher-timeframe RSI context while scalping lower frames.
🔹 Stealth Mode – Borders-only visualization for minimal chart impact on dark themes.
🔹 Complete Customization – Adjustable band levels, color palettes, and fade behavior.
🧠 Designed for professional traders who move with rhythm, not randomness.
Distance from Anchored VWAPjust a simple script allowing you to drop anchored vwap from a daily event ie earnings release, breaking news etc.  Calculates distance from anchored vwap to also give you an idea on extension away from move for pull ins or pull backs
Clean Market Structures This indicator marks out the highs and lows on the chart, allowing traders to easily follow the market structure and identify potential liquidity zones.
 Highs  are plotted when an up candle is followed by a down candle, marking the highest wick of that two-candle formation.
 Lows  are plotted when a down candle is followed by an up candle, marking the lowest wick of that two-candle formation.
These levels often act as  liquidity  pools, since  liquidity  typically rests above  previous highs  and below  previous lows .
By highlighting these areas, the indicator helps traders visualize where price may seek liquidity and react, making it useful for structure-based and liquidity-driven trading strategies.






















