NFCI National Financial Conditions Index
Chicago Fed National Financial Conditions Index (NFCI)
This indicator plots the Chicago Fed’s National Financial Conditions Index (NFCI).
The NFCI updates weekly, and its latest value is displayed across all chart intervals.
The NFCI measures how tight or loose overall U.S. financial conditions are. It combines over 100 weekly indicators from the money, bond, and equity markets—along with credit and leverage data—into a single composite index.
The NFCI has three key subcomponents, each of which can be independently selected within the indicator:
Risk: Captures volatility, credit spreads, and overall market stress.
Credit: Tracks how easy or difficult it is to borrow across households and businesses.
Leverage: Reflects the level of debt and balance-sheet strength in the financial system.
When the NFCI rises, financial conditions are tightening — liquidity is contracting, borrowing costs are climbing, and investors tend to reduce risk.
When the NFCI falls, conditions are loosening — liquidity expands, credit flows more freely, and markets generally become more risk-seeking.
Traders often use the NFCI as a macro backdrop for risk appetite: rising values signal growing stress and defensive positioning, while falling values indicate improving liquidity and a more supportive market environment.
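As a sketch of the data plumbing (not the published source), the weekly series can be pulled onto any chart interval in Pine Script v6, assuming the series is carried on TradingView as FRED:NFCI:

```pine
//@version=6
indicator("NFCI sketch", overlay=false)
// Assumption: the weekly NFCI series is available as FRED:NFCI.
// gaps_off repeats the latest weekly value on every chart bar, so the most
// recent reading stays visible on any interval.
nfci = request.security("FRED:NFCI", "1W", close, gaps = barmerge.gaps_off, lookahead = barmerge.lookahead_off)
plot(nfci, "NFCI", color = nfci > nfci[1] ? color.red : color.teal)  // rising = tightening
hline(0, "Long-run average", color = color.gray)
```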
Statistics
Rolling Correlation vs Another Symbol (SPY Default)
This indicator visualizes the rolling correlation between the current chart symbol and another selected asset, helping traders understand how closely the two move together over time.
It calculates the Pearson correlation coefficient over a user-defined period (default 22 bars) and plots it as a color-coded line:
• Green line → positive correlation (move in the same direction)
• Red line → negative correlation (move in opposite directions)
• A gray dashed line marks the zero level (no correlation).
The background highlights periods of strong relationship:
• Light green when correlation > +0.7 (strong positive)
• Light red when correlation < –0.7 (strong negative)
Use this tool to quickly spot diversification opportunities, confirm hedges, or understand how assets interact during different market regimes.
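The core logic is compact in Pine Script v6; a minimal sketch using the built-in Pearson correlation (input names here are illustrative, not the published script's exact parameters):

```pine
//@version=6
indicator("Rolling correlation sketch", overlay=false)
sym = input.symbol("AMEX:SPY", "Comparison symbol")
len = input.int(22, "Correlation length", minval=2)
other = request.security(sym, timeframe.period, close)
corr  = ta.correlation(close, other, len)  // Pearson correlation over `len` bars
plot(corr, "Correlation", color = corr >= 0 ? color.green : color.red)
hline(0, "Zero", color = color.gray, linestyle = hline.style_dashed)
// Highlight strong-relationship regimes
bgcolor(corr > 0.7 ? color.new(color.green, 85) : corr < -0.7 ? color.new(color.red, 85) : na)
```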
Standardization (Z-score)
Standardization, often referred to as Z-score normalization, is a data preprocessing technique that rescales data to have a mean of 0 and a standard deviation of 1. The resulting values, known as Z-scores, indicate how many standard deviations an individual data point is from the mean of the dataset (or a rolling sample of it).
This indicator calculates and plots the Z-score for a given input series over a specified lookback period. It is a fundamental tool for statistical analysis, outlier detection, and preparing data for certain machine learning algorithms.
## Core Concepts
* **Standardization:** The process of transforming data to fit a standard normal distribution (or more generally, to have a mean of 0 and standard deviation of 1).
* **Z-score (Standard Score):** A dimensionless quantity that represents the number of standard deviations by which a data point deviates from the mean of its sample.
The formula for a Z-score is:
`Z = (x - μ) / σ`
Where:
* `x` is the individual data point (e.g., current value of the source series).
* `μ` (mu) is the mean of the sample (calculated over the lookback period).
* `σ` (sigma) is the standard deviation of the sample (calculated over the lookback period).
* **Mean (μ):** The average value of the data points in the sample.
* **Standard Deviation (σ):** A measure of the amount of variation or dispersion of a set of values. A low standard deviation indicates that the values tend to be close to the mean, while a high standard deviation indicates that the values are spread out over a wider range.
## Common Settings and Parameters
| Parameter | Type | Default | Function | When to Adjust |
| :-------------- | :----------- | :------ | :------------------------------------------------------------------------------------------------------ | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Source | series float | close | The input data series (e.g., price, volume, indicator values). | Choose the series you want to standardize. |
| Lookback Period | int | 20 | The number of bars (sample size) used for calculating the mean (μ) and standard deviation (σ). Min 2. | A larger period provides more stable estimates of μ and σ but will be less responsive to recent changes. A shorter period is more reactive. `minval` is 2 because `ta.stdev` requires it. |
**Pro Tip:** Z-scores are excellent for identifying anomalies or extreme values. For instance, applying Standardization to trading volume can help quickly spot days with unusually high or low activity relative to the recent norm (e.g., Z-score > 2 or < -2).
## Calculation and Mathematical Foundation
The Z-score is calculated for each bar as follows, using a rolling window defined by the `Lookback Period`:
1. **Calculate Mean (μ):** The simple moving average (`ta.sma`) of the `Source` data over the specified `Lookback Period` is calculated. This serves as the sample mean `μ`.
`μ = ta.sma(Source, Lookback Period)`
2. **Calculate Standard Deviation (σ):** The standard deviation (`ta.stdev`) of the `Source` data over the same `Lookback Period` is calculated. This serves as the sample standard deviation `σ`.
`σ = ta.stdev(Source, Lookback Period)`
3. **Calculate Z-score:**
* If `σ > 0`: The Z-score is calculated using the formula:
`Z = (Current Source Value - μ) / σ`
* If `σ = 0`: This implies all values in the lookback window are identical (and equal to the mean). In this case, the Z-score is defined as 0, as the current source value is also equal to the mean.
* If `σ` is `na` (e.g., insufficient data in the lookback period), the Z-score is `na`.
> 🔍 **Technical Note:**
> * The `Lookback Period` must be at least 2 for `ta.stdev` to compute a valid standard deviation.
> * The Z-score calculation uses the sample mean and sample standard deviation from the rolling lookback window.
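The full calculation, including both edge cases, reduces to a few lines of Pine Script v6 (a minimal sketch of the logic described above, not the published source):

```pine
//@version=6
indicator("Z-score sketch", overlay=false)
src = input.source(close, "Source")
len = input.int(20, "Lookback Period", minval=2)
mu    = ta.sma(src, len)    // rolling sample mean
sigma = ta.stdev(src, len)  // rolling sample standard deviation
// sigma == 0 means every value in the window equals the mean, so Z is 0;
// sigma == na (warm-up bars) propagates na.
z = na(sigma) ? float(na) : sigma > 0 ? (src - mu) / sigma : 0.0
plot(z, "Z-score")
hline(1)
hline(2)
hline(3)
hline(-1)
hline(-2)
hline(-3)
```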
## Interpreting the Z-score
* **Magnitude and Sign:**
* A Z-score of **0** means the data point is identical to the sample mean.
* A **positive Z-score** indicates the data point is above the sample mean. For example, Z = 1 means the point is 1 standard deviation above the mean.
* A **negative Z-score** indicates the data point is below the sample mean. For example, Z = -1 means the point is 1 standard deviation below the mean.
* **Typical Range:** For data that is approximately normally distributed (bell-shaped curve):
* About 68% of Z-scores fall between -1 and +1.
* About 95% of Z-scores fall between -2 and +2.
* About 99.7% of Z-scores fall between -3 and +3.
* **Outlier Detection:** Z-scores significantly outside the -2 to +2 range, and especially outside -3 to +3, are often considered outliers or extreme values relative to the recent historical data in the lookback window.
* **Volatility Indication:** When applied to price, large absolute Z-scores can indicate moments of high volatility or significant deviation from the recent price trend.
The indicator plots horizontal lines at ±1, ±2, and ±3 standard deviations to help visualize these common thresholds.
## Common Applications
1. **Outlier Detection:** Identifying data points that are unusual or extreme compared to the rest of the sample. This is a primary use in financial markets for spotting abnormal price moves, volume spikes, etc.
2. **Comparative Analysis:** Allows for comparison of scores from different distributions that might have different means and standard deviations. For example, comparing the Z-score of returns for two different assets.
3. **Feature Scaling in Machine Learning:** Standardizing features to have a mean of 0 and standard deviation of 1 is a common preprocessing step for many machine learning algorithms (e.g., SVMs, logistic regression, neural networks) to improve performance and convergence.
4. **Creating Normalized Oscillators:** The Z-score itself can be used as an oscillator (theoretically unbounded, though values rarely stray far beyond ±3 in practice), indicating how far the current price has deviated from its moving average in terms of standard deviations.
5. **Statistical Process Control:** Used in quality control charts to monitor if a process is within expected statistical limits.
## Limitations and Considerations
* **Assumption of Normality for Probabilistic Interpretation:** While Z-scores can always be calculated, the probabilistic interpretations (e.g., "68% of data within ±1σ") strictly apply to normally distributed data. Financial data is often not perfectly normal (e.g., it can have fat tails).
* **Sensitivity of Mean and Standard Deviation to Outliers:** The sample mean (μ) and standard deviation (σ) used in the Z-score calculation can themselves be influenced by extreme outliers within the lookback period. This can sometimes mask or exaggerate the Z-score of other points.
* **Choice of Lookback Period:** The Z-score is highly dependent on the `Lookback Period`. A short period makes it very sensitive to recent fluctuations, while a long period makes it smoother and less responsive. The appropriate period depends on the analytical goal.
* **Stationarity:** For time series data, Z-scores are calculated based on a rolling window. This implicitly assumes some level of local stationarity (i.e., the mean and standard deviation are relatively stable within the window).
Multi-Session Viewer and Analyzer
Fully customizable multi-session viewer that takes session analysis to the next level. It allows you to customize each session to your liking and includes a feature that highlights certain periods of time on the chart, plus a Time Range Marker.
It helps you analyze the instrument that you trade and pinpoint which times are more volatile than others. It also helps you choose the best time to trade your instrument and align your life schedule with the market.
NZDUSD Example:
- 3 major sessions displayed.
- Although this is NZDUSD, Sydney is not the best time to trade this pair. Volatility picks up at Tokyo open.
- I have time to trade in the evening from 18:00 to 22:00 PST. I live in a different time zone, while the market is based on EST. How does the pair behave during the time I am available to trade, based on my time zone? The Time Range Marker feature lets you see this clearly on the chart (black lines).
- I have some time in the morning to trade during the New York session, but there is no way I am waking up at 05:00 PST. 06:30 PST seems doable. The blue highlighted area is a good time to trade during the New York session based on what Bob said. It seems this aligns with when I am available and able to trade. Volatility is also at its peak.
- I am also available to trade between London close and Tokyo open on some days of the week, but based on what I see, the green highlighted area clearly shows that I probably don't want to waste my time trading this pair from London close until Tokyo open. I will use this time for something else rather than be stuck in a range.
Forex Dynamic Lot Size Calculator
A dynamic lot size calculator for Forex. Works on USD-base and USD-quote pairs. Provides real-time data based on the stop-loss location, so you know in real time how many lots you need to purchase to match your risk %.
The number of lots is calculated from total risk, which is Stop-Loss + Commission + Spread Fees + Slippage, measured in pips. Also includes data such as break-even pips, net take profit, margin required, buying power used, and a few others. All values are real-time and anchored to the current price.
This indicator is intended to help with risk management: you know exactly how many lots to buy at this very moment to put your total risk at, let's say, $250, including commission fees, spread fees, and slippage.
To put it simply, if I was to enter the trade right now and willing to risk exactly $250, how many lots will I need to get right this second?
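The arithmetic behind that question is simple. A hedged Pine v6 sketch for a USD-quote pair (input names and the flat $10-per-pip-per-standard-lot figure are illustrative; the actual script derives pip value per symbol):

```pine
//@version=6
indicator("Lot size sketch", overlay=true)
riskUsd    = input.float(250.0, "Total risk ($)")
slPips     = input.float(20.0,  "Stop loss (pips)")
commPips   = input.float(1.0,   "Commission (pips)")
spreadPips = input.float(1.0,   "Spread (pips)")
slipPips   = input.float(0.5,   "Slippage (pips)")
// For USD-quote pairs, one standard lot (100,000 units) is worth about $10 per pip.
pipValuePerLot = 10.0
totalPips = slPips + commPips + spreadPips + slipPips  // total risk in pips
lots      = riskUsd / (totalPips * pipValuePerLot)
if barstate.islast
    label.new(bar_index, high, str.format("Lots: {0,number,#.##}", lots))
```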
---
- To use adjust Account Settings along with other variables.
- Stop Loss Mode can be Manual or Dynamic. If you select Dynamic, you will have to adjust the Stop Loss Level so that the reference line is visible on the screen. It is at 1.1 by default; just enter the current price and the line will appear, then drag it to where you want your stop loss to be.
- Take Profit Mode can also be Manual or Dynamic. I just keep my TP at Manual and use Quick Access to set Quick RR levels.
- Adjust Spreads and Slippage to your liking. I tried to have TV calculate the current spread, but it seems it doesn't have access to real-life data the way MT5 does, so I use an average instead. Both are optional, depending on your broker and the type of account you use.
- Pip Value for the current pair, Return on Margin, and the Break-even line can be turned on and off based on your needs. I take the break-even value in pips from the panel and use it as a reference for where I need to relocate my stop loss to break even (commission + spreads + slippage).
- The panel is fully customizable to your liking. Important fields are highlighted along with reference lines.
Risk Leverage Tool – Calculate Position Size and Required Leverage
This script automatically calculates the optimal position size and the leverage needed based on the amount of capital you are willing to risk on a trade. It is designed for traders who want precise control over their risk management.
The script determines the distance between the entry and stop-loss price, calculates the maximum position size that fits within the defined risk, and derives the notional value of the trade. Based on the available margin, it then calculates the required leverage. It also displays the percentage of margin at risk if the stop-loss is hit.
All results are displayed in a table in the top-right corner of the chart. Additionally, a label appears at the entry price level showing the same data.
To use the tool, simply input your planned entry price, stop-loss price, the maximum risk amount in dollars, and the available margin in the settings menu. The script will update all values automatically in real time.
This tool works with any market where capital risk is expressed in absolute terms (such as USD), including futures, CFDs, and leveraged spot positions. For inverse contracts or percentage-based stops, manual adjustment is required.
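A condensed Pine v6 sketch of that calculation chain (input names are illustrative):

```pine
//@version=6
indicator("Risk leverage sketch", overlay=true)
entry     = input.price(100.0,  "Entry price")
stop      = input.price(95.0,   "Stop-loss price")
riskUsd   = input.float(250.0,  "Max risk ($)")
marginUsd = input.float(1000.0, "Available margin ($)")
riskPerUnit = math.abs(entry - stop)     // loss per unit if the stop is hit
qty         = riskUsd / riskPerUnit      // largest size within the risk budget
notional    = qty * entry                // trade's notional value
leverage    = notional / marginUsd       // leverage required for that notional
marginRisk  = riskUsd / marginUsd * 100  // % of margin lost at the stop
if barstate.islast
    label.new(bar_index, entry, str.format("Qty {0,number,#.##} | Leverage {1,number,#.#}x | {2,number,#.#}% of margin at risk", qty, leverage, marginRisk))
```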
Adaptive Trend Selector
The Adaptive Trend Selector is a comprehensive trend-following tool designed to automatically identify the optimal moving average crossover strategy. It features adjustable parameters and an integrated backtester that delivers institutional-grade insights into the recommended strategy. The model continuously adapts to new data in real time by evaluating multiple moving average combinations, determining the best performing lengths, and presenting the backtest results in a clear, color-coded table that benchmarks performance against the buy-and-hold strategy.
At its core, the model systematically backtests a wide range of moving average combinations to identify the configuration that maximizes the selected optimization metric. Users can choose to optimize for absolute returns or risk-adjusted returns using the Sharpe, Sortino, or Calmar ratios. Alternatively, users can enable manual optimization to test custom fast and slow moving average lengths and view the corresponding backtest results. The label displays the Compounded Annual Growth Rate (CAGR) of the strategy, with the buy-and-hold CAGR in parentheses for comparison. The table presents the backtest results based on the fast and slow lengths displayed at the top:
Sharpe = CAGR per unit of standard deviation.
Sortino = CAGR per unit of downside deviation.
Calmar = CAGR relative to maximum drawdown.
Max DD = Largest peak-to-trough decline in value.
Beta (β) = Return sensitivity relative to buy-and-hold.
Alpha (α) = Excess annualized risk-adjusted returns.
Win Rate = Ratio of profitable trades to total trades.
Profit Factor = Total gross profit per unit of losses.
Expectancy = Average expected return per trade.
Trades/Year = Average number of trades per year.
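For reference, the sketch below shows how the two headline numbers can be derived from a crossover equity curve in Pine v6, using fixed 20/50 SMA lengths for illustration and assuming daily bars; note it follows the table's CAGR-based Sharpe definition rather than the conventional mean-return one:

```pine
//@version=6
indicator("Crossover metrics sketch", overlay=false)
fast = ta.sma(close, 20)
slow = ta.sma(close, 50)
ret  = nz(fast[1] > slow[1] ? close / close[1] - 1.0 : 0.0)  // bar return while long
var float equity = 1.0
equity := equity * (1.0 + ret)
var int t0 = na
if na(t0)
    t0 := time
years  = math.max((time - t0) / 31536000000.0, 1.0 / 365.0)  // ms in a 365-day year
cagr   = math.pow(equity, 1.0 / years) - 1.0
annVol = ta.stdev(ret, 252) * math.sqrt(252)  // annualized stdev of bar returns
sharpe = cagr / annVol                        // "CAGR per unit of standard deviation"
plot(cagr * 100, "CAGR %")
plot(sharpe, "Sharpe")
```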
This indicator is designed with flexibility in mind, enabling users to specify the start date of the backtesting period and the preferred moving average strategy. Supported strategies include the Exponential Moving Average (EMA), Simple Moving Average (SMA), Wilder’s Moving Average (RMA), Weighted Moving Average (WMA), and Volume-Weighted Moving Average (VWMA). To minimize overfitting, users can define constraints such as a minimum and maximum number of trades per year, as well as an optional optimization margin that prioritizes longer, more robust combinations by requiring shorter-length strategies to exceed this threshold. The table follows an intuitive color logic that enables quick performance comparison against buy-and-hold (B&H):
Sharpe = Green indicates better than B&H, while red indicates worse.
Sortino = Green indicates better than B&H, while red indicates worse.
Calmar = Green indicates better than B&H, while red indicates worse.
Max DD = Green indicates better than B&H, while red indicates worse.
Beta (β) = Green indicates better than B&H, while red indicates worse.
Alpha (α) = Green indicates above 0%, while red indicates below 0%.
Win Rate = Green indicates above 50%, while red indicates below 50%.
Profit Factor = Green indicates above 2, while red indicates below 1.
Expectancy = Green indicates above 0%, while red indicates below 0%.
In summary, the Adaptive Trend Selector is a powerful tool designed to help investors make data-driven decisions when selecting moving average crossover strategies. By optimizing for risk-adjusted returns, investors can confidently identify the best lengths using institutional-grade metrics. While results are based on the selected historical period, users should be mindful of potential overfitting, as past results may not persist under future market conditions. Since the model recalibrates to incorporate new data, the recommended lengths may evolve over time.
Mercury Retrograde — Daily boxes & bottom % (stable v6)
Verification of the Mercury retrograde anomaly, applied to Nikkei 225 price movements over the past five years. The script compares the closing price at the start of each Mercury retrograde period with the closing price at its end, color-coding the result by the percentage increase or decrease.
Lump Sum Favorability (SPX & NDX)
This indicator provides a visual dashboard to gauge the statistical favorability of deploying a "Lump Sum" investment into the SPX (S&P 500) or NDX (Nasdaq 100).
The primary goal is not to time the exact market bottom, but to identify zones of significant pessimism or euphoria. Historically, periods of indiscriminate selling have represented high-probability entry points for long-term investors.
The dashboard consists of two parts:
1. The Favorability Gauge: A 12-segment gauge that moves from Red (Unfavorable) to Teal (Favorable).
2. The Summary Text: An optional text box (enabled in settings) that provides a plain-English summary of the current market breadth.
---
The Method: Market Breadth
This indicator is not based on the price of the index itself. Price-based indicators (like an RSI on the SPX) can be misleading. In a market-cap-weighted index, a few mega-cap stocks can hold the index price up while the vast majority of "average" stocks are already in a deep bear market.
This tool uses Market Breadth to measure the true, underlying health and participation of the entire market.
How It Works
1. Data Source: The indicator pulls the daily percentage of companies within the selected index (SPX or NDX) that are trading above their 200-day moving average. (Data tickers: S5TH for SPX, NDTH for NDX).
2. Smoothing: This raw data is volatile. To filter out daily noise and confirm a persistent trend, the indicator calculates a 5-day Simple Moving Average (SMA) of this percentage. This is the value used by the indicator.
3. Interpretation:
High Value (>= 50%): More than half of the stocks are above their long-term average. This signifies the market is "Overheated" or in a risk-on phase. The favorability for a new lump sum investment is considered Low.
Low Value (< 50%): Less than half of the stocks are above their long-term average. This signifies "Oversold" conditions or capitulation. These moments historically offer the best favorability for starting a new long-term investment.
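A hedged Pine v6 sketch of that pipeline (assuming the breadth series are published under the INDEX: prefix, consistent with the tickers above):

```pine
//@version=6
indicator("Breadth favorability sketch", overlay=false)
market = input.string("SPX", "Market", options=["SPX", "NDX"])
tick   = market == "SPX" ? "INDEX:S5TH" : "INDEX:NDTH"  // % of members above 200d MA
pct    = request.security(tick, "1D", close)
smooth = ta.sma(pct, 5)  // 5-day SMA to filter daily noise
favorable = smooth < 50  // oversold breadth = favorable for lump sum
plot(smooth, "% above 200d MA (5d SMA)", color = favorable ? color.teal : color.red)
hline(50, "Threshold")
```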
---
How to Use the Indicator
1. The Favorability Gauge
The gauge is designed to be intuitive: Red means "Stop/Caution," and Teal means "Go/Opportunity."
Note: The gauge's logic is inverted from the data value to achieve this simplicity.
Red Zone (Left): UNFAVORABLE
This corresponds to a high percentage of stocks being above their 200d MA (>= 50%). The market is considered Overheated, and the favorability for a new lump sum investment is low.
Teal Zone (Right): FAVORABLE
This corresponds to a low percentage of stocks being above their 200d MA (< 50%). The market is considered Oversold, and the favorability for a new lump sum investment is high.
2. The Summary Text
When "Show Summary Text" is enabled in the settings, a box will appear at the top-center of your chart. This box provides a clear, data-driven summary, such as:
"Currently, only 22% of S&P 500 companies are above their 200-day MA. Market is Oversold."
The color of this text will automatically change to match the market state (Red for Overheated, Teal for Oversold), providing instant confirmation of the gauge's reading.
---
Settings
Market: Choose the index to analyze: SPX (S&P 500) or NDX (Nasdaq 100).
Gauge Position: Select where the gauge dashboard should appear on your chart (default is Bottom Right).
Show Summary Text: Toggle the descriptive text box on or off (default is On).
---
This indicator is a statistical and historical guide, not a financial advice or timing signal. It is designed to measure favorability based on past market behavior, not to provide certainty.
Extreme oversold conditions can persist, and markets can always go lower. This tool should be used as one component of a broader investment and risk-management framework. Past performance is not a guarantee of future results.
GARCH Range Predictor
This was inspired by deltatrendtrading's video on GARCH models to predict daily trading ranges and identify favorable trading conditions. Based on advanced volatility forecasting techniques, it predicts whether a trading day's true range will exceed a threshold, helping traders decide when to trade or skip a session.
Key Features
GARCH(1,1) Volatility Modeling: Uses log-transformed true ranges with exponential moving average centering
Forward-Looking Predictions: Makes predictions at session start before the day unfolds
Dynamic or Static Thresholds: Choose between fixed dollar thresholds or adaptive 20-day averages
Accuracy Tracking: Monitors prediction accuracy with overall and recent (20-day) hit rates
Visual Session Boxes: Colors trading sessions green (trade) or red (skip) based on predictions
Real-Time Statistics: Displays current predictions, thresholds, and performance metrics
How It Works
Data Transformation: Log-transforms daily true ranges and centers them using an EMA
Variance Modeling: Updates GARCH variance using: σ²ₜ = ω + α(residual²) + β(σ²ₜ₋₁)
Prediction Generation: Back-transforms log predictions to dollar values
Signal Generation: Compares predictions to threshold to generate trade/skip signals
Performance Tracking: Validates predictions against actual outcomes
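A compact Pine v6 sketch of steps 1–3 (the ω/α/β values are placeholders, and the back-transform is simplified to a one-sigma-high estimate):

```pine
//@version=6
indicator("GARCH(1,1) range sketch", overlay=false)
omega  = input.float(0.05, "omega", step=0.01)
alpha  = input.float(0.10, "alpha", step=0.01)
beta   = input.float(0.85, "beta",  step=0.01)
emaLen = input.int(20, "EMA period")
logTR  = math.log(math.max(ta.tr(true), syminfo.mintick))  // log true range, floored above zero
center = ta.ema(logTR, emaLen)                             // EMA centering
resid  = logTR - center
// GARCH(1,1) recursion: sigma2 = omega + alpha * resid[1]^2 + beta * sigma2[1]
var float sigma2 = na
sigma2 := na(sigma2) ? resid * resid : omega + alpha * math.pow(resid[1], 2) + beta * sigma2[1]
// Rough back-transform of a one-sigma-high log range to price units
predictedRange = math.exp(center + math.sqrt(sigma2))
plot(predictedRange, "Predicted range")
```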
Parameters
GARCH Parameters (ω, α, β): Control volatility persistence and mean reversion
EMA Period: Smoothing period for log range centering
Threshold Settings: Static dollar amount or dynamic multiplier of recent averages
Session Time: Define regular trading hours for analysis
Best Use Cases
Breakout and momentum strategies that perform better on high-range days
Risk management by avoiding low-volatility sessions
Futures day trading (optimized for MNQ/NQ detection)
Any strategy where daily range impacts profitability
Important Notes
Requires 5+ sessions for initialization and warm-up
Accuracy depends heavily on proper parameter tuning for your specific instrument
Default parameters may need adjustment for different markets
Monitor the hit rate to validate effectiveness on your timeframe
RBLR - GSK Vizag AP India
This indicator identifies the Opening Range High (ORH) and Low (ORL) based on the first 15 minutes of the Indian equity market session (9:15 AM to 9:30 AM IST). It draws horizontal lines extending these levels until market close (3:30 PM IST) and generates visual signals for price breakouts above ORH or below ORL, as well as reversals back into the range.
Key features:
- **Range Calculation**: Captures the high and low during the opening period using real-time bar data.
- **Line Extension**: Lines are dynamically extended bar-by-bar within the session for clear visualization.
- **Signals**:
- Green triangle up: Crossover above ORH (potential bullish breakout).
- Red triangle down: Crossunder below ORL (potential bearish breakout).
- Yellow labels: Reversals from breakout levels back into the range.
- **Labels**: "RAM BAAN" marks the ORH (inspired by a precise arrow from the Ramayana), and "LAKSHMAN REKHA" marks the ORL (inspired by a protective boundary line from the same epic).
- **Customization**: Toggle signals on/off and select line styles (Dotted, Dashed, Solid, or Smoothed, with transparency for Smoothed).
The state-tracking logic prevents redundant signals by monitoring if price remains outside the range after a breakout. This helps users observe range-bound behavior or directional moves without built-in alerts. This indicator is particularly useful for day trading on longer intraday timeframes (e.g., 15-minute charts) to identify session-wide trends and avoid noise in shorter frames. For best results, apply on intraday timeframes on NSE/BSE symbols. Note that lines and labels are limited to the script's max counts to avoid performance issues on long histories.
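A minimal Pine v6 sketch of the opening-range capture for the 9:15–9:30 IST window (line drawing, reversal labels, and the state-tracking logic are omitted):

```pine
//@version=6
indicator("Opening range sketch", overlay=true)
inOR = not na(time(timeframe.period, "0915-0930", "Asia/Kolkata"))
var float orh = na
var float orl = na
if session.isfirstbar  // reset at the start of each session
    orh := na
    orl := na
if inOR                // accumulate the 15-minute range
    orh := na(orh) ? high : math.max(orh, high)
    orl := na(orl) ? low  : math.min(orl, low)
plot(orh, "ORH (RAM BAAN)",       color=color.green, style=plot.style_linebr)
plot(orl, "ORL (LAKSHMAN REKHA)", color=color.red,   style=plot.style_linebr)
plotshape(ta.crossover(close, orh),  style=shape.triangleup,   color=color.green, location=location.belowbar)
plotshape(ta.crossunder(close, orl), style=shape.triangledown, color=color.red,   location=location.abovebar)
```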
**Disclaimer**: This indicator is for educational and informational purposes only and does not constitute financial, investment, or trading advice. Trading in financial markets involves significant risk of loss and is not suitable for all investors. Past performance is not indicative of future results. Users should conduct their own research, consider their financial situation, and consult with qualified professionals before making any investment decisions. The author and TradingView assume no liability for any losses incurred from its use.
Liquidity Stress Index (SOFR - IORB)
How to use:
> +10 bps — TIGHT
−5 to +10 bps — NEUTRAL
< −5 bps — LOOSE
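A minimal sketch of the spread, assuming both series are carried on TradingView as FRED:SOFR and FRED:IORB:

```pine
//@version=6
indicator("SOFR - IORB spread sketch", overlay=false)
sofr = request.security("FRED:SOFR", "1D", close)
iorb = request.security("FRED:IORB", "1D", close)
spreadBps = (sofr - iorb) * 100  // percentage points to basis points
col = spreadBps > 10 ? color.red : spreadBps < -5 ? color.green : color.gray
plot(spreadBps, "SOFR - IORB (bps)", color=col)
hline(10, "Tight")
hline(-5, "Loose")
```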
PG ATM Strike Line with Call & Put Premiums
Pine Script: ATM Strike Line with Call & Put Premiums (Simplified)
This Pine Script for TradingView displays the At-The-Money (ATM) strike price, futures price, call/put premiums (time value), and two ratios—Premium Ratio (PR) and Volume Ratio (VR)—for a user-selected underlying asset (e.g., NIFTY, BANKNIFTY, or stocks). It helps traders gauge near-term market direction using options data.
How the Script Works
Inputs:
- Expiry: Select year (e.g., '25), month (01–12), day (01–31) for option expiry (e.g., '251028').
- Timeframe: Choose data timeframe (e.g., Daily, 15-min).
- Symbol: Auto-detects chart symbol or select from Indian indices/stocks.
- Strike: Auto-ATM (based on futures) or manual strike input.
- Interval: Auto (e.g., 100 for NIFTY) or custom strike interval.
- Colors: Customizable for ATM line, labels (Futures Price, CPR, PPR, VR, PR).
Calculations:
- Futures Price (FP): Fetches front-month futures price (e.g., NSE:NIFTY1!).
- ATM Strike: Rounds futures price to nearest strike interval.
- Option Data: Retrieves Last Traded Price (LTP) and volume for ATM call/put options (e.g., NSE:NIFTY251028C24200).
- Call Premium (CPR): Call LTP minus intrinsic value (max(0, FP - Strike)).
- Put Premium (PPR): Put LTP minus intrinsic value (max(0, Strike - FP)).
- Premium Ratio (PR): PPR / CPR.
- Volume Ratio (VR): Put Volume / Call Volume.
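A hedged Pine v6 sketch of the premium math (the expiry code '251028' and ticker strings are illustrative, following the NSE convention above; Pine v6 enables dynamic symbol requests by default):

```pine
//@version=6
indicator("ATM premium sketch", overlay=true)
fp       = request.security("NSE:NIFTY1!", timeframe.period, close)  // front-month futures
interval = 100.0                                                     // NIFTY strike step
atm      = math.round(fp / interval) * interval                      // nearest strike
strikeStr = str.tostring(atm, "#")
// Illustrative expiry code '251028'; real scripts build this from the expiry inputs.
callLtp = request.security("NSE:NIFTY251028C" + strikeStr, timeframe.period, close)
putLtp  = request.security("NSE:NIFTY251028P" + strikeStr, timeframe.period, close)
cpr = callLtp - math.max(0.0, fp - atm)  // call time value (premium)
ppr = putLtp  - math.max(0.0, atm - fp)  // put time value (premium)
pr  = ppr / cpr                          // Premium Ratio
plot(atm, "ATM strike", color=color.orange)
```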
Visuals:
- Draws ATM strike line on chart.
- Displays labels: FP (futures price), CPR (call premium), PPR (put premium), VR, PR.
- VR/PR labels: Red (≥ 1.25, bearish), Green (≤ 0.75, bullish), Gray (0.75–1.25, neutral).
- Updates on last confirmed bar to avoid redraws.
Using PR and VR for Market Direction
Premium Ratio (PR):
- PR ≥ 1.25 (Red): High put premiums suggest bearish sentiment (expect price drop).
- PR ≤ 0.75 (Green): High call premiums suggest bullish sentiment (expect price rise).
- 0.75 < PR < 1.25 (Gray): Neutral, no clear direction.
- Use: High PR favors bearish trades (e.g., buy puts); low PR favors bullish trades (e.g., buy calls).
Volume Ratio (VR):
- VR ≥ 1.25 (Red): High put volume indicates bearish activity.
- VR ≤ 0.75 (Green): High call volume indicates bullish activity.
- 0.75 < VR < 1.25 (Gray): Neutral trading activity.
- Use: High VR suggests bearish moves; low VR suggests bullish moves.
Combined Signals:
- High PR & VR: Strong bearish signal; consider put buying or call selling.
- Low PR & VR: Strong bullish signal; consider call buying or put selling.
- Mixed/Neutral: Use price action or support/resistance for confirmation.
Tips:
- Combine with technical analysis (e.g., trends, levels).
- Match timeframe to trading horizon (e.g., 15-min for intraday).
- Monitor FP for context; check volatility or news for accuracy.
Example
NIFTY: FP = 24,237.50, ATM = 24,200, CPR = 120.25, PPR = 180.50, PR = 1.50 (Red), VR = 1.30 (Red).
- Insight: High PR/VR suggests bearish bias; consider bearish trades if price nears resistance.
- Action: Buy puts or exit longs; confirm with price action.
Conclusion: This script provides a concise tool for options traders, showing ATM strike, premiums, and PR/VR ratios. High PR/VR (≥ 1.25) signals bearish sentiment, low PR/VR (≤ 0.75) signals bullish sentiment, and neutral (0.75–1.25) suggests indecision. Combine with technical analysis for robust trading decisions in the Indian options market.
LogNormal
Library "LogNormal"
A collection of functions used to model skewed distributions as log-normal.
Prices are commonly modeled using log-normal distributions (i.e., Black-Scholes) because they exhibit multiplicative changes with long tails; skewed exponential growth and high variance. This approach is particularly useful for understanding price behavior and estimating risk, assuming continuously compounding returns are normally distributed.
Because log space analysis is not as direct as using math.log(price), this library extends the Error Functions library to make working with log-normally distributed data as simple as possible.
- - -
QUICK START
Import library into your project
Initialize model with a mean and standard deviation
Pass model params between methods to compute various properties
var LogNorm model = LN.init(arr.avg(), arr.stdev()) // Assumes the library is imported as LN
var mode = model.mode()
Outputs from the model can be adjusted to better fit the data.
var Quantile data = arr.quantiles()
var more_accurate_mode = mode.fit(model, data) // Fits value from model to data
Inputs to the model can also be adjusted to better fit the data.
datum = 123.45
model_equivalent_datum = datum.fit(data, model) // Fits value from data to the model
area_from_zero_to_datum = model.cdf(model_equivalent_datum)
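Putting the pieces together, a hypothetical end-to-end usage; the import path PUBLISHER/LogNormal/1 is a placeholder for the actual publication:

```pine
//@version=6
indicator("LogNormal usage sketch", overlay=false)
import PUBLISHER/LogNormal/1 as LN  // placeholder path; substitute the real one

// Maintain a rolling sample of the last 252 closes
var arr = array.new<float>()
arr.push(close)
if arr.size() > 252
    arr.shift()

model = LN.init(arr.avg(), arr.stdev())       // linear stats -> log-space parameters
prob  = LN.cdf(model, close, false)           // P(X <= close) under the model
vaR   = LN.value_at_risk(model, 0.95, false)  // 95% confidence risk threshold
plot(prob, "CDF at close")
plot(vaR, "VaR", display = display.data_window)
```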
- - -
TYPES
There are two requisite UDTs: LogNorm and Quantile. They are used to pass parameters between functions and are set automatically (see Type Management).
LogNorm
Object for log space parameters and linear space quantiles .
Fields:
mu (float) : Log space mu ( µ ).
sigma (float) : Log space sigma ( σ ).
variance (float) : Log space variance ( σ² ).
quantiles (Quantile) : Linear space quantiles.
Quantile
Object for linear quantiles, most similar to a seven-number summary.
Fields:
Q0 (float) : Smallest Value
LW (float) : Lower Whisker Endpoint
LC (float) : Lower Whisker Crosshatch
Q1 (float) : First Quartile
Q2 (float) : Second Quartile
Q3 (float) : Third Quartile
UC (float) : Upper Whisker Crosshatch
UW (float) : Upper Whisker Endpoint
Q4 (float) : Largest Value
IQR (float) : Interquartile Range
MH (float) : Midhinge
TM (float) : Trimean
MR (float) : Mid-Range
- - -
TYPE MANAGEMENT
These functions reliably initialize and update the UDTs. Because parameterization is interdependent, avoid setting the LogNorm and Quantile fields directly.
init(mean, stdev, variance)
Initializes a LogNorm object.
Parameters:
mean (float) : Linearly measured mean.
stdev (float) : Linearly measured standard deviation.
variance (float) : Linearly measured variance.
Returns: LogNorm Object
set(ln, mean, stdev, variance)
Transforms linear measurements into log space parameters for a LogNorm object.
Parameters:
ln (LogNorm) : Object containing log space parameters.
mean (float) : Linearly measured mean.
stdev (float) : Linearly measured standard deviation.
variance (float) : Linearly measured variance.
Returns: LogNorm Object
quantiles(arr)
Gets empirical quantiles from an array of floats.
Parameters:
arr (array) : Float array object.
Returns: Quantile Object
- - -
DESCRIPTIVE STATISTICS
Using only the initialized LogNorm parameters, these functions compute a model's central tendency and standardized moments.
mean(ln)
Computes the linear mean from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
median(ln)
Computes the linear median from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
mode(ln)
Computes the linear mode from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
variance(ln)
Computes the linear variance from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
skewness(ln)
Computes the linear skewness from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
kurtosis(ln, excess)
Computes the linear kurtosis from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
excess (bool) : Excess Kurtosis (true) or regular Kurtosis (false).
Returns: Between 0 and ∞
hyper_skewness(ln)
Computes the linear hyper skewness from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
hyper_kurtosis(ln, excess)
Computes the linear hyper kurtosis from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
excess (bool) : Excess Hyper Kurtosis (true) or regular Hyper Kurtosis (false).
Returns: Between 0 and ∞
- - -
DISTRIBUTION FUNCTIONS
These wrap Gaussian functions to make working with model space more direct. Because they are contained within a log-normal library, they describe estimations relative to a log-normal curve, even though they fundamentally measure a Gaussian curve.
pdf(ln, x, empirical_quantiles)
A Probability Density Function estimates the probability density . For clarity, density is not a probability .
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate for which a density will be estimated.
empirical_quantiles (Quantile) : Quantiles as observed in the data (optional).
Returns: Between 0 and ∞
cdf(ln, x, precise)
A Cumulative Distribution Function estimates the area under a Log-Normal curve between Zero and a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
ccdf(ln, x, precise)
A Complementary Cumulative Distribution Function estimates the area under a Log-Normal curve between a linear X coordinate and Infinity.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
cdfinv(ln, a, precise)
An Inverse Cumulative Distribution Function reverses the Log-Normal cdf() by estimating the linear X coordinate from an area.
Parameters:
ln (LogNorm) : Object of log space parameters.
a (float) : Normalized area .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
ccdfinv(ln, a, precise)
An Inverse Complementary Cumulative Distribution Function reverses the Log-Normal ccdf() by estimating the linear X coordinate from an area.
Parameters:
ln (LogNorm) : Object of log space parameters.
a (float) : Normalized area .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
cdfab(ln, x1, x2, precise)
A Cumulative Distribution Function from A to B estimates the area under a Log-Normal curve between two linear X coordinates (A and B).
Parameters:
ln (LogNorm) : Object of log space parameters.
x1 (float) : First linear X coordinate .
x2 (float) : Second linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
ott(ln, x, precise)
A One-Tailed Test transforms a linear X coordinate into an absolute Z Score before estimating the area under a Log-Normal curve between Z and Infinity.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 0.5
ttt(ln, x, precise)
A Two-Tailed Test transforms a linear X coordinate into symmetrical ± Z Scores before estimating the area under a Log-Normal curve from Zero to -Z, and +Z to Infinity.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
ottinv(ln, a, precise)
An Inverse One-Tailed Test reverses the Log-Normal ott() by estimating a linear X coordinate for the right tail from an area.
Parameters:
ln (LogNorm) : Object of log space parameters.
a (float) : Half a normalized area .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
tttinv(ln, a, precise)
An Inverse Two-Tailed Test reverses the Log-Normal ttt() by estimating two linear X coordinates from an area.
Parameters:
ln (LogNorm) : Object of log space parameters.
a (float) : Normalized area .
precise (bool) : Double precision (true) or single precision (false).
Returns: Linear space tuple :
- - -
UNCERTAINTY
Model-based measures of uncertainty, information, and risk.
sterr(sample_size, fisher_info)
The standard error of a sample statistic.
Parameters:
sample_size (float) : Number of observations.
fisher_info (float) : Fisher information.
Returns: Between 0 and ∞
surprisal(p, base)
Quantifies the information content of a single event.
Parameters:
p (float) : Probability of the event .
base (float) : Logarithmic base (optional).
Returns: Between 0 and ∞
entropy(ln, base)
Computes the differential entropy (average surprisal).
Parameters:
ln (LogNorm) : Object of log space parameters.
base (float) : Logarithmic base (optional).
Returns: Between 0 and ∞
perplexity(ln, base)
Computes the average number of distinguishable outcomes from the entropy.
Parameters:
ln (LogNorm)
base (float) : Logarithmic base used for Entropy (optional).
Returns: Between 0 and ∞
value_at_risk(ln, p, precise)
Estimates a risk threshold under normal market conditions for a given confidence level.
Parameters:
ln (LogNorm) : Object of log space parameters.
p (float) : Probability threshold, aka. the confidence level .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
value_at_risk_inv(ln, value_at_risk, precise)
Reverses the value_at_risk() by estimating the confidence level from the risk threshold.
Parameters:
ln (LogNorm) : Object of log space parameters.
value_at_risk (float) : Value at Risk.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
conditional_value_at_risk(ln, p, precise)
Estimates the average loss beyond a confidence level, aka. expected shortfall.
Parameters:
ln (LogNorm) : Object of log space parameters.
p (float) : Probability threshold, aka. the confidence level .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
conditional_value_at_risk_inv(ln, conditional_value_at_risk, precise)
Reverses the conditional_value_at_risk() by estimating the confidence level of an average loss.
Parameters:
ln (LogNorm) : Object of log space parameters.
conditional_value_at_risk (float) : Conditional Value at Risk.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
partial_expectation(ln, x, precise)
Estimates the partial expectation of a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and µ
partial_expectation_inv(ln, partial_expectation, precise)
Reverses the partial_expectation() by estimating a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
partial_expectation (float) : Partial Expectation .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
conditional_expectation(ln, x, precise)
Estimates the conditional expectation of a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between X and ∞
conditional_expectation_inv(ln, conditional_expectation, precise)
Reverses the conditional_expectation by estimating a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
conditional_expectation (float) : Conditional Expectation .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
fisher(ln, log)
Computes the Fisher Information Matrix for the distribution, not a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
log (bool) : Sets if the matrix should be in log (true) or linear (false) space.
Returns: FIM for the distribution
fisher(ln, x, log)
Computes the Fisher Information Matrix for a linear X coordinate, not the distribution itself.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
log (bool) : Sets if the matrix should be in log (true) or linear (false) space.
Returns: FIM for the linear X coordinate
confidence_interval(ln, x, sample_size, confidence, precise)
Estimates a confidence interval for a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
sample_size (float) : Number of observations.
confidence (float) : Confidence level .
precise (bool) : Double precision (true) or single precision (false).
Returns: CI for the linear X coordinate
- - -
CURVE FITTING
An overloaded function that helps transform values between spaces. The primary function uses quantiles, and the overloads wrap the primary function to make working with LogNorm more direct.
fit(x, a, b)
Transforms X coordinate between spaces A and B.
Parameters:
x (float) : Linear X coordinate from space A .
a (LogNorm | Quantile | array) : LogNorm, Quantile, or float array.
b (LogNorm | Quantile | array) : LogNorm, Quantile, or float array.
Returns: Adjusted X coordinate
- - -
EXPORTED HELPERS
Small utilities to simplify extensibility.
z_score(ln, x)
Converts a linear X coordinate into a Z Score.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate.
Returns: Between -∞ and +∞
x_coord(ln, z)
Converts a Z Score into a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
z (float) : Standard normal Z Score.
Returns: Between 0 and ∞
iget(arr, index)
Gets an interpolated value of a pseudo-element (a fictional element between real array elements). Useful for quantile mapping.
Parameters:
arr (array) : Float array object.
index (float) : Index of the pseudo element.
Returns: Interpolated value of the array's pseudo-element.
Kalman VWAP Filter [BackQuant]
Kalman VWAP Filter
A precision-engineered price estimator that fuses Kalman filtering with the Volume-Weighted Average Price (VWAP) to create a smooth, adaptive representation of fair value. This hybrid model intelligently balances responsiveness and stability, tracking trend shifts with minimal noise while maintaining a statistically grounded link to volume distribution.
If you would like to see my original Kalman Filter, please find it here:
Concept overview
The Kalman VWAP Filter is built on two core ideas from quantitative finance and control theory:
Kalman filtering — a recursive Bayesian estimator used to infer the true underlying state of a noisy system (in this case, fair price).
VWAP anchoring — a dynamic reference that weights price by traded volume, representing where the majority of transactions have occurred.
By merging these concepts, the filter produces a line that behaves like a "smart moving average": smooth when noise is high, fast when markets trend, and self-adjusting based on both market structure and user-defined noise parameters.
How it works
Measurement blend : Combines the chosen Price Source (e.g., close or hlc3) with either a Session VWAP or a Rolling VWAP baseline. The VWAP Weight input controls how much the filter trusts traded volume versus price movement.
Kalman recursion : Each bar updates an internal "state estimate" using the Kalman gain, which determines how much to trust new observations vs. the prior state.
Noise parameters :
Process Noise controls agility — higher values make the filter more responsive but also more volatile.
Measurement Noise controls smoothness — higher values make it steadier but slower to adapt.
Filter order (N) : Defines how many parallel state estimates are used. Larger orders yield smoother output by layering multiple one-dimensional Kalman passes.
Final output : A refined price trajectory that captures VWAP-adjusted fair value while dynamically adjusting to real-time volatility and order flow.
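In simplified form (filter order N = 1, session VWAP as the anchor), the recursion looks like the sketch below; this is an illustration of the mechanism, not the published implementation:

```pine
//@version=6
indicator("Kalman VWAP sketch", overlay=true)
src   = input.source(hlc3, "Price Source")
w     = input.float(0.3,  "VWAP Weight", minval=0.0, maxval=1.0)
procN = input.float(0.05, "Process Noise", step=0.01)
measN = input.float(3.0,  "Measurement Noise")

z = (1.0 - w) * src + w * ta.vwap  // measurement: price blended with session VWAP

var float state = na
var float p     = 1.0              // error covariance
if na(state)
    state := z
else
    pPred  = p + procN                 // predict
    k      = pPred / (pPred + measN)   // Kalman gain
    state := state + k * (z - state)   // update toward the measurement
    p     := (1.0 - k) * pPred

plot(state, "Kalman VWAP", color = state > state[1] ? color.green : color.red, linewidth=2)
```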
Why this matters
Most smoothing techniques (EMA, SMA, Hull) trade off lag for smoothness. Kalman filtering, however, adaptively rebalances that tradeoff each bar using probabilistic weighting, allowing it to follow market state changes more efficiently. Anchoring it to VWAP integrates microstructure context — capturing where liquidity truly lies rather than only where price moves.
Use cases
Trend tracking : Color-coded candle painting highlights shifts in slope direction, revealing early trend transitions.
Fair value mapping : The line represents a continuously updated equilibrium price between raw price action and VWAP flow.
Adaptive moving average replacement : Outperforms static MAs in variable volatility regimes by self-adjusting smoothness.
Execution & reversion logic : When price diverges from the Kalman VWAP, it may indicate short-term imbalance or overextension relative to volume-adjusted fair value.
Cross-signal framework : Use with standard VWAP or other filters to identify convergence or divergence between liquidity-weighted and state-estimated prices.
Parameter guidance
Process Noise : 0.01–0.05 for swing traders, 0.1–0.2 for intraday scalping.
Measurement Noise : 2–5 for normal use, 8+ for very smooth tracking.
VWAP Weight : 0.2–0.4 balances both price and VWAP influence; 1.0 locks output directly to VWAP dynamics.
Filter Order (N) : 3–5 for reactive short-term filters; 8–10 for smoother institutional-style baselines.
Interpretation
When price > Kalman VWAP and slope is positive → bullish pressure; buyers dominate above fair value.
When price < Kalman VWAP and slope is negative → bearish pressure; sellers dominate below fair value.
Convergence of price and Kalman VWAP often signals equilibrium; strong divergence suggests imbalance.
Crosses between Kalman VWAP and the base VWAP can hint at shifts in short-term vs. long-term liquidity control.
Summary
The Kalman VWAP Filter blends statistical estimation with market microstructure awareness, offering a refined alternative to static smoothing indicators. It adapts in real time to volatility and order flow, helping traders visualize balance, transition, and momentum through a lens of probabilistic fair value rather than simple price averaging.
Trading Toolkit – Comprehensive Analysis
A unified trading analysis toolkit with four sections:
📊 Company Info
Fundamentals, market cap, sector, and earnings countdown.
📅 Performance
Date‑range analysis with key metrics.
🎯 Market Sentiment
CNN‑style Fear & Greed Index (7 components) + 150‑SMA positioning.
🛡️ Risk Levels
ATR/MAD‑based stop‑loss and take‑profit calculations.
Key Features
CNN‑style Fear & Greed approximation using:
Momentum: S&P 500 vs 125‑DMA
Price Strength: NYSE 52‑week highs vs lows
Market Breadth: McClellan Volume Summation (Up/Down volume)
Put/Call Ratio: 5‑day average (inverted)
Volatility: VIX vs 50‑DMA (inverted)
Safe‑Haven Demand: 20‑day SPY–IEF return spread
Junk‑Bond Demand: HY vs IG credit spread (inverted)
Normalization: z‑score → percentile (0–100) with ±3 clipping (see the sketch after this feature list).
CNN‑aligned thresholds:
Extreme Fear: 0–24 | Fear: 25–44 | Neutral: 45–54 | Greed: 55–74 | Extreme Greed: 75+.
Risk tools: ATR & MAD volatility measures with configurable multipliers.
Flexible layout: vertical or side‑by‑side columns.
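A hedged Pine v6 sketch of the normalization step for one component (the momentum input, per the list above; the logistic curve is a standard approximation of the normal CDF):

```pine
//@version=6
indicator("Z-to-score sketch", overlay=false)
len = input.int(252, "Normalization lookback")
// Example component: S&P 500 momentum vs its 125-day MA
spx = request.security("CBOE:SPX", "1D", close)
mom = spx / ta.sma(spx, 125) - 1.0
z   = (mom - ta.sma(mom, len)) / ta.stdev(mom, len)
zc  = math.max(-3.0, math.min(3.0, z))  // clip to +-3
// Map the clipped z-score to 0..100 with a logistic approximation of the normal CDF
score = 100.0 / (1.0 + math.exp(-1.702 * zc))
plot(score, "Component score (0-100)")
```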
Data Sources
S&P 500: CBOE:SPX or AMEX:SPY
NYSE: INDEX:HIGN, INDEX:LOWN, USI:UVOL, USI:DVOL
Options: USI:PCC (Total PCR), fallback INDEX:CPCS (Equity PCR)
Volatility: CBOE:VIX
Treasuries: NASDAQ:IEF
Credit Spreads: FRED:BAMLH0A0HYM2, FRED:BAMLC0A0CM
Risk Management
ATR risk bands: 🟢 ≤3%, 🟡 3–6%, ⚪ 6–10%, 🟠 10–15%, 🔴 >15%
MAD‑based stop‑loss and take‑profit calculations.
Author: Daniel Dahan
(AI Generated, Merged & enhanced version with CNN‑style Fear & Greed)
Trading Lot & Margin Calculator
# 💹 Trading Lot & Margin Calculator - Professional Risk Management Tool
## 🎯 Overview
A comprehensive, all-in-one calculator dashboard that helps traders determine optimal position sizes, calculate margin requirements, and manage risk effectively across multiple asset classes. This indicator displays directly on your chart as a customizable table, providing real-time calculations based on current market prices.
## ✨ Key Features
### 📊 Three Powerful Calculation Modes:
**1. Calculate Lot Size (Risk-Based Position Sizing)**
- Input your risk percentage and stop loss in pips
- Automatically calculates the optimal lot size for your risk tolerance
- Respects margin limitations (configurable margin % cap)
- Ensures positions don't exceed minimum lot size (0.01)
- Perfect for risk management and proper position sizing
**2. Calculate Margin Cost**
- Input desired lot size
- See exactly how much margin is required
- Shows percentage of deposit used
- Displays free margin remaining
- Warns when insufficient funds
**3. Margin to Lots**
- Specify a fixed margin amount you want to use
- Calculator shows how many lots/contracts you can buy
- Ideal for traders with fixed margin budgets
## 🤖 Auto-Detection of Instruments
The calculator **automatically detects** what you're trading and adjusts calculations accordingly:
### ✅ Fully Supported:
- **💱 Forex Pairs** - All majors, minors, exotics (EURUSD, GBPJPY, etc.)
- Standard lot: 100,000 units
- JPY pairs: 0.01 pip size, others: 0.0001
- **🛢️ Commodities** - Gold, Silver, Oil
- XAUUSD (Gold): 100 oz per lot
- XAGUSD (Silver): 5,000 oz per lot
- Oil (WTI/Brent): 1,000 barrels per lot
- **📈 Indices** - US500, NAS100, US30, DAX, etc.
- Correct contract sizes per point
- **📊 Stocks** - All individual stocks
- 1 lot = 1 share
- Direct share calculations
### ⚠️ Known Limitation:
- **₿ Crypto calculations may not work properly** on all crypto pairs. Use manual contract size if needed.
## 📋 Dashboard Information Displayed:
- 🎯 Optimal/Requested Lot Size
- 💰 Margin Required
- 📊 Margin % of Deposit
- 💵 Free Margin Remaining
- 💎 Position Value
- 📈 Pip/Point Value
- ⚠️ Safety Warnings (insufficient funds, high risk, etc.)
- 🔍 Detected Instrument Type
- 📦 Contract Size
## ⚙️ Customizable Settings:
**Account Settings:**
- Account Deposit
- Leverage (1:1 to 1:1000)
- Max Margin % of Deposit (default 5% for safety)
**Risk Management:**
- Risk Percentage (for lot size calculation)
- Stop Loss in Pips
- Lot Amount (for margin cost calculation)
- Margin to Use (for margin-to-lots calculation)
**Display Options:**
- Show/Hide Dashboard
- Position: Top/Middle/Bottom, Left/Right
- Auto-detect instrument ON/OFF
- Manual contract size override
## 🎨 Professional Design
- Clean, modern table interface
- Color-coded warnings (red = danger, yellow = caution, green = safe)
- Large, readable text
- Minimal screen space usage
- Non-intrusive overlay
## 💡 Use Cases:
1. **Day Traders** - Quick position sizing based on account risk
2. **Swing Traders** - Calculate optimal positions for longer-term setups
3. **Risk Managers** - Ensure positions stay within margin limits
4. **Beginners** - Learn proper position sizing and risk management
5. **Multi-Asset Traders** - Seamlessly switch between forex, commodities, indices, and stocks
## ⚠️ Important Notes:
- ✅ Works on all timeframes
- ✅ Updates in real-time with price changes
- ✅ Minimum lot size enforced (0.01)
- ✅ Margin calculations use current chart price
- ⚠️ **Crypto calculations may be inaccurate** - verify with your broker
- 📌 Always verify calculations with your broker's specifications
- 📌 Contract sizes may vary by broker
## 🚀 How to Use:
1. Add indicator to any chart
2. Click settings ⚙️ icon
3. Enter your account details (deposit, leverage)
4. Choose calculation mode
5. Input your parameters
6. View optimal lot size and margin requirements on dashboard
## 📈 Perfect For:
- Forex traders managing multiple currency pairs
- Commodity traders (Gold, Silver, Oil)
- Index traders (S&P 500, NASDAQ, etc.)
- Stock traders
- Anyone who wants professional risk management
## 🛡️ Risk Management Features:
- Configurable margin % cap prevents over-leveraging
- Risk-based position sizing protects your account
- Warnings for high risk, insufficient funds, margin limitations
- Prevents positions below minimum lot size
---
**Trade smarter, not harder. Calculate before you trade!** 📊💪
---
## Version Notes:
- Pine Script v6
- Overlay mode for chart display
- No external dependencies
- Lightweight and fast
**Disclaimer:** This calculator is for educational and informational purposes only. Always verify calculations with your broker and trade at your own risk. Past performance does not guarantee future results.
---
Broad Market for Crypto + Index
# Broad Market Indicator for Crypto
## Overview
The Broad Market Indicator for Crypto helps traders assess the strength and divergence of individual cryptocurrency assets relative to the overall market. By comparing price deviations across multiple assets, this indicator reveals whether a specific coin is moving in sync with or diverging from the broader crypto market trend.
## How It Works
This indicator calculates percentage deviations from simple moving averages (SMA) for both individual assets and an equal-weighted market index. The core methodology:
1. **Deviation Calculation**: For each asset, the indicator measures how far the current price has moved from its SMA over a specified lookback period (default: 24 hours). The deviation is expressed as a percentage: `(Current Price - SMA) / SMA × 100`
2. **Market Index Construction**: An equal-weighted index is built from selected cryptocurrencies (up to 15 assets). The default composition includes major crypto assets: BTC, ETH, BNB, SOL, XRP, ADA, AVAX, LINK, DOGE, and TRX.
3. **Comparative Analysis**: The indicator displays both the current instrument's deviation and the market index deviation on the same panel, making it easy to spot relative strength or weakness.
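A condensed Pine v6 sketch of steps 1–2 with a three-asset basket (the full script supports up to 15):

```pine
//@version=6
indicator("Broad market deviation sketch", overlay=false)
len = input.int(24, "Lookback Period")
dev(sym) =>
    p = request.security(sym, timeframe.period, close)
    s = ta.sma(p, len)
    (p - s) / s * 100              // % deviation from SMA
dBTC = dev("BINANCE:BTCUSDT")
dETH = dev("BINANCE:ETHUSDT")
dSOL = dev("BINANCE:SOLUSDT")
index = (dBTC + dETH + dSOL) / 3.0  // equal-weighted index deviation
selfS = ta.sma(close, len)
selfD = (close - selfS) / selfS * 100  // current chart symbol's deviation
plot(index, "Index deviation", color=color.new(color.gray, 0), style=plot.style_area)
plot(selfD, "Current symbol", color=color.blue)
hline(0, "Zero")
```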
## Key Features
- **Customizable Asset Selection**: Choose up to 15 different cryptocurrencies to include in your market index
- **Flexible Configuration**: Toggle individual assets on/off for display and index calculation
- **Current Instrument Tracking**: Automatically plots the deviation of whatever chart you're viewing
- **Visual Clarity**: Color-coded lines for easy differentiation between assets, with the market index shown as a filled area
- **Adjustable Lookback Period**: Modify the SMA period to match your trading timeframe
## How to Use
### Identifying Market Divergences
- When the current instrument deviates significantly above the index, it shows relative strength
- When it deviates below, it indicates relative weakness
- Assets clustering around zero suggest neutral market conditions
### Trend Confirmation
- If both the index and your asset are rising together (positive deviation), it confirms a broad market uptrend
- Divergence between asset and index can signal unique fundamental factors or early trend changes
### Entry/Exit Signals
- Extreme deviations from the index may indicate overbought/oversold conditions relative to the market
- Convergence back toward the index line can signal mean reversion opportunities
## Settings
- **Lookback Period**: Adjust the SMA calculation period (default: 24 hours)
- **Asset Configuration**: Select which cryptocurrencies to monitor and include in the index
- **Display Options**: Show/hide individual assets, current instrument, and market index
- **Color Customization**: Personalize colors for better visual analysis
## Best Practices
- Use on higher timeframes (4H, Daily) for more reliable signals
- Combine with volume analysis for confirmation
- Consider fundamental news when assets show extreme divergence
- Adjust the asset basket to match your trading focus (DeFi, L1s, memecoins, etc.)
## Technical Notes
- The indicator uses `request.security()` to fetch data from multiple symbols
- Deviations are calculated independently for each asset
- The zero line represents perfect alignment with the moving average
- Index calculation automatically adjusts based on active assets
## Default Assets
1. BTC (Bitcoin) - BINANCE:BTCUSDT
2. ETH (Ethereum) - BINANCE:ETHUSDT
3. BNB (Binance Coin) - BINANCE:BNBUSDT
4. SOL (Solana) - BINANCE:SOLUSDT
5. XRP (Ripple) - BINANCE:XRPUSDT
6. ADA (Cardano) - BINANCE:ADAUSDT
7. AVAX (Avalanche) - BINANCE:AVAXUSDT
8. LINK (Chainlink) - BINANCE:LINKUSDT
9. DOGE (Dogecoin) - BINANCE:DOGEUSDT
10. TRX (Tron) - BINANCE:TRXUSDT
Additional slots (11-15) are available for custom asset selection.
---
This indicator is particularly useful for cryptocurrency traders seeking to understand market breadth and identify opportunities where specific assets are diverging from overall market sentiment.
天干地支标注(当前视窗范围 + 居中标签)
Heavenly Stems & Earthly Branches Marker (visible window + centered labels)
This indicator automatically calculates and displays the Chinese Heavenly Stems and Earthly Branches (Ganzhi) corresponding to each candlestick, adapting to the chart's timeframe (Year, Month, Day, Hour, or Minute).
• Auto-detects the chart timeframe (year/month/day/hour/minute)
• Draws only within the current visible window, keeping performance high and lag-free
• Adjustable display interval: show a label every N bars (default: every bar)
• Uses centered rectangular labels (label.style_label_center) for clear readability
• Works with both dark and light themes automatically, with no configuration required
Useful as a reference tool for combining financial time series with the traditional Chinese calendar (Ganzhi timekeeping), with supporting applications in time-cycle studies, Feng Shui, fortune-cycle analysis, and Gann-style timing models.
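For reference, here is a minimal Pine Script sketch of the year pillar alone, using the standard sexagenary mapping (1984 CE = 甲子, so an offset of 4 works for CE years); the published indicator also derives month, day, hour, and minute pillars and restricts drawing to the visible window:

```pine
//@version=6
indicator("Ganzhi Year Label Sketch", overlay=true)

// Sexagenary cycle tables: 10 Heavenly Stems and 12 Earthly Branches
var stems    = array.from("甲", "乙", "丙", "丁", "戊", "己", "庚", "辛", "壬", "癸")
var branches = array.from("子", "丑", "寅", "卯", "辰", "巳", "午", "未", "申", "酉", "戌", "亥")

// Year pillar: 1984 CE is 甲子, so subtracting 4 aligns CE years with the cycle
ganzhiYear(int y) =>
    array.get(stems, (y - 4) % 10) + array.get(branches, (y - 4) % 12)

if barstate.islast
    label.new(bar_index, high, ganzhiYear(year), style=label.style_label_center)
```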
ICOptimizer
Library "ICOptimizer"
Library for IC-based parameter optimization
findOptimalParam(testParams, icValues, currentParam, smoothing)
Find optimal parameter from array of IC values
Parameters:
testParams (array) : Array of parameter values being tested
icValues (array) : Array of IC values for each parameter (same size as testParams)
currentParam (float) : Current parameter value (for smoothing)
smoothing (simple float) : Smoothing factor (0-1, e.g., 0.2 means 20% new, 80% old)
Returns: New parameter value, its IC, and array index
adaptiveParamWithStarvation(opt, testParams, icValues, smoothing, starvationThreshold, starvationJumpSize)
Adaptive parameter selection with starvation handling
Parameters:
opt (ICOptimizer) : ICOptimizer object
testParams (array) : Array of parameter values
icValues (array) : Array of IC values for each parameter
smoothing (simple float) : Normal smoothing factor
starvationThreshold (simple int) : Number of updates before triggering starvation mode
starvationJumpSize (simple float) : Jump size when in starvation (as fraction of range)
Returns: Updated parameter and IC
detectAndAdjustDomination(longCount, shortCount, currentLongLevel, currentShortLevel, dominationRatio, jumpSize, minLevel, maxLevel)
Detect signal imbalance and adjust parameters
Parameters:
longCount (int) : Number of long signals in period
shortCount (int) : Number of short signals in period
currentLongLevel (float) : Current long threshold
currentShortLevel (float) : Current short threshold
dominationRatio (simple int) : Ratio threshold (e.g., 4 = 4:1 imbalance)
jumpSize (simple float) : Size of adjustment
minLevel (simple float) : Minimum allowed level
maxLevel (simple float) : Maximum allowed level
Returns:
calcIC(signals, returns, lookback)
Parameters:
signals (float)
returns (float)
lookback (simple int)
classifyIC(currentIC, icWindow, goodPercentile, badPercentile)
Parameters:
currentIC (float)
icWindow (simple int)
goodPercentile (simple int)
badPercentile (simple int)
evaluateSignal(signal, forwardReturn)
Parameters:
signal (float)
forwardReturn (float)
updateOptimizerState(opt, signal, forwardReturn, currentIC, metaICPeriod)
Parameters:
opt (ICOptimizer)
signal (float)
forwardReturn (float)
currentIC (float)
metaICPeriod (simple int)
calcSuccessRate(successful, total)
Parameters:
successful (int)
total (int)
createICStatsTable(opt, paramName, normalSuccess, normalTotal)
Parameters:
opt (ICOptimizer)
paramName (string)
normalSuccess (int)
normalTotal (int)
initOptimizer(initialParam)
Parameters:
initialParam (float)
ICOptimizer
Fields:
currentParam (series float)
currentIC (series float)
metaIC (series float)
totalSignals (series int)
successfulSignals (series int)
goodICSignals (series int)
goodICSuccess (series int)
nonBadICSignals (series int)
nonBadICSuccess (series int)
goodICThreshold (series float)
badICThreshold (series float)
updateCounter (series int)
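A hedged usage sketch for the adaptive-parameter workflow: the import path is hypothetical (substitute the publisher's username and published version), and the IC values are placeholders that would normally be produced by calcIC() for each candidate parameter:

```pine
//@version=6
indicator("ICOptimizer Usage Sketch")

// Hypothetical import path: replace "user" and the version number as published
import user/ICOptimizer/1 as ico

// Candidate parameter values and a placeholder IC for each
testParams = array.from(7.0, 14.0, 21.0)
icValues   = array.from(0.02, 0.05, 0.03)  // in practice, computed with calcIC()

// Smooth toward the best-IC candidate (20% new value, 80% old)
var float param = 14.0
[newParam, bestIC, idx] = ico.findOptimalParam(testParams, icValues, param, 0.2)
param := newParam
plot(param, "Adaptive Parameter")
```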
ATR %
ATR % Oscillator
A simple and effective Average True Range (ATR) indicator displayed as a percentage of the current price in a separate panel.
FEATURES:
• ATR displayed as percentage of current price for easy cross-asset comparison
• EMA smoothing line using the same period as ATR
• Configurable ATR period (default: 20)
• Clean visualization with zero reference line
HOW IT WORKS:
The indicator calculates ATR and converts it to a percentage: (ATR / Close) × 100
This normalization allows you to:
- Compare volatility across different instruments regardless of price
- Identify high and low volatility periods
- Use the EMA line to spot volatility trends
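A minimal Pine Script sketch of this calculation, omitting the independent-timeframe option listed below:

```pine
//@version=6
indicator("ATR % Sketch")
atrLen = input.int(20, "ATR Period")
atrPct = ta.atr(atrLen) / close * 100                    // ATR as a % of price
plot(atrPct, "ATR %", color=color.teal)
plot(ta.ema(atrPct, atrLen), "EMA", color=color.orange)  // same-period smoothing
hline(0, "Zero Line")
```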
PARAMETERS:
ATR Period - The lookback period for ATR calculation (default: 20)
Timeframe - Choose any timeframe for ATR calculation independently from the chart timeframe (default: chart timeframe)
Kelly Wave Position Matrix 20251024 V1 ZENYOUNG
A simple table designed for use when opening a position. It applies the Kelly formula to calculate a more scientific position size from win rate and risk-reward ratio. It also displays 1.65× ATR stop-loss levels for both long and short positions as a reference against existing stop-loss placements.
Additionally, the table back-calculates the position size implied by a 2% total-capital loss limit, using the actual loss ratio, and shows the current wave-trend status as a pre-filtering condition.
Overall, the table integrates the core elements of trading: trend (wave confirmation), win rate, risk-reward ratio, and position sizing. It serves as an effective checklist before entering a trade, helping to secure a probabilistic edge and positive expected value in trading decisions.
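As a sketch of the underlying arithmetic, with illustrative inputs and an assumed 14-period ATR (the table's actual parameters and wave-trend logic are not shown here):

```pine
//@version=6
indicator("Kelly Position Sketch", overlay=true)

// Illustrative inputs (assumed names and defaults)
winRate = input.float(0.45, "Win Rate", minval=0.0, maxval=1.0)
rr      = input.float(2.0, "Risk-Reward Ratio")
equity  = input.float(10000.0, "Total Capital")

// Kelly fraction: f = p - (1 - p) / b, where b is the risk-reward ratio
kelly = winRate - (1 - winRate) / rr
plot(kelly, "Kelly Fraction", display=display.data_window)

// 1.65x ATR stop-loss references for long and short positions
atrStop = 1.65 * ta.atr(14)
plot(close - atrStop, "Long Stop", color=color.red)
plot(close + atrStop, "Short Stop", color=color.green)

// Back-calculated size so a stop-out loses at most 2% of capital
maxSize = equity * 0.02 / atrStop  // position size in units of the asset
plot(maxSize, "Max Size (2% rule)", display=display.data_window)
```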