Indicators and strategies
Historical Volatility Estimators
Historical volatility is a statistical measure of the dispersion of returns for a given security or market index over a given period. This indicator provides several historical volatility estimators with percentile gradient coloring and a volatility stats panel.
█ OVERVIEW There are multiple ways to estimate historical volatility beyond the traditional close-to-close estimator. This indicator provides range-based volatility estimators that take the high, low, and open into account, as well as estimators that use other statistical measures in place of standard deviation. The gradient coloring and stats panel give an overview of how high or low the current volatility is compared to its historical values.
█ CONCEPTS We covered the concept of historical volatility in our previous indicators: Historical Volatility, Historical Volatility Rank, and Historical Volatility Percentile. You can check those scripts for the definitions. The basic calculation is simply the sample standard deviation of log returns scaled by the square root of time. The main focus of this script is the difference between volatility models.
Close-to-Close HV Estimator: Close-to-Close is the traditional historical volatility calculation. It uses the sample standard deviation. Note: TradingView's built-in historical volatility value is a bit off because it uses the population standard deviation instead of the sample standard deviation. N − 1 should be used here to remove the sampling bias.
Pros:
• Close-to-Close HV estimators are the most commonly used estimators in finance. The calculation is straightforward and easy to understand. When people reference historical volatility, most of the time they mean the close-to-close estimator.
Cons:
• The Close-to-Close estimator calculates volatility based only on the closing price. It does not take intraday volatility, such as the high and low, into account. It also does not account for the jump when the open and the previous close are not the same.
• Close-to-Close weights past volatility equally during the lookback period, while there are other ways to weight the historical data.
• Close-to-Close is calculated from the standard deviation, so it is vulnerable to returns that are not normally distributed and have fat tails. Mean and median absolute deviation make the historical volatility more stable in the presence of extreme values.
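The close-to-close calculation described above can be sketched in Python (an illustrative sketch, not the Pine Script source; the 252-trading-day annualization default and the function name are my own):

```python
import math

def close_to_close_hv(closes, annual=252):
    """Annualized close-to-close HV: sample standard deviation of log returns."""
    r = [math.log(closes[i] / closes[i - 1]) for i in range(1, len(closes))]
    mean = sum(r) / len(r)
    # n - 1 in the denominator removes the sampling bias
    var = sum((x - mean) ** 2 for x in r) / (len(r) - 1)
    return math.sqrt(var * annual)
```

A series with a perfectly constant log return has zero dispersion, so its close-to-close HV is zero.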
Parkinson HV Estimator:
• Parkinson was one of the first to propose improvements to the historical volatility calculation.
• Parkinson suggested that using the high and low of each bar represents volatility better, as it takes intraday volatility into account. Parkinson HV is therefore also known as Parkinson High-Low HV.
• It is about 5.2 times more efficient than the Close-to-Close estimator. But it does not take jumps and drift into account, and therefore underestimates volatility.
Note: By dividing the Parkinson volatility by the Close-to-Close volatility, you get a result similar to a variance ratio test. It is called the Parkinson number and can be used to test whether the market follows a random walk. (It is mentioned in Nassim Taleb's Dynamic Hedging book, though he appears to have written the ratio incorrectly.)
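The Parkinson formula uses only each bar's high/low range; a minimal Python sketch (illustrative, with an assumed 252-day annualization):

```python
import math

def parkinson_hv(highs, lows, annual=252):
    """Annualized Parkinson HV from per-bar high/low ranges."""
    n = len(highs)
    s = sum(math.log(h / l) ** 2 for h, l in zip(highs, lows))
    var = s / (4 * math.log(2) * n)  # Parkinson per-bar variance
    return math.sqrt(var * annual)
```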
Garman-Klass Estimator:
• Garman-Klass expanded on Parkinson's estimator. Instead of using only the high and low, Garman-Klass's method uses the open, close, high, and low to construct a minimum-variance estimator.
• The estimator is about 7.4 times more efficient than the traditional estimator. But like Parkinson HV, it ignores jumps and drift, and therefore underestimates volatility.
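A sketch of the Garman-Klass formula (illustrative, not the Pine source; 252-day annualization assumed):

```python
import math

def garman_klass_hv(opens, highs, lows, closes, annual=252):
    """Annualized Garman-Klass HV using OHLC (assumes zero drift, no jumps)."""
    terms = [0.5 * math.log(h / l) ** 2
             - (2 * math.log(2) - 1) * math.log(c / o) ** 2
             for o, h, l, c in zip(opens, highs, lows, closes)]
    return math.sqrt(sum(terms) / len(terms) * annual)
```

When open equals close, only the range term contributes, and the per-bar variance reduces to half the squared log range.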
Rogers-Satchell Estimator:
• Rogers and Satchell found drawbacks in Garman-Klass's estimator: it assumes price follows Brownian motion with zero drift.
• The Rogers-Satchell estimator is calculated from the open, close, high, and low, and it can also handle drift in the financial series.
• Rogers-Satchell HV is more efficient than Garman-Klass HV when there is drift in the data, and slightly less efficient when drift is zero. The estimator doesn't handle jumps, so it still underestimates volatility.
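The Rogers-Satchell formula in Python (illustrative sketch; annualization assumed). Its drift independence is visible in the test: a bar that trends straight from open-at-low to close-at-high contributes zero variance.

```python
import math

def rogers_satchell_hv(opens, highs, lows, closes, annual=252):
    """Annualized Rogers-Satchell HV; drift-independent, ignores jumps."""
    terms = [math.log(h / c) * math.log(h / o) + math.log(l / c) * math.log(l / o)
             for o, h, l, c in zip(opens, highs, lows, closes)]
    return math.sqrt(sum(terms) / len(terms) * annual)
```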
Garman-Klass Yang-Zhang extension:
• Yang-Zhang extended Garman-Klass HV so that it can handle jumps. However, unlike the Rogers-Satchell estimator, this estimator cannot handle drift. It is about 8 times more efficient than the traditional estimator.
• The Garman-Klass Yang-Zhang extension HV has the same value as Garman-Klass when there are no gaps in the data, such as in cryptocurrencies.
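The extension adds an overnight-gap term to the Garman-Klass per-bar variance; when each bar opens exactly at the previous close, the gap term vanishes and the value matches plain Garman-Klass, as the section notes. A sketch (illustrative, annualization assumed):

```python
import math

def gkyz_hv(opens, highs, lows, closes, annual=252):
    """Garman-Klass Yang-Zhang extension: Garman-Klass plus an overnight gap term."""
    terms = []
    for i in range(1, len(closes)):
        gap = math.log(opens[i] / closes[i - 1]) ** 2           # overnight jump
        hl = 0.5 * math.log(highs[i] / lows[i]) ** 2
        co = (2 * math.log(2) - 1) * math.log(closes[i] / opens[i]) ** 2
        terms.append(gap + hl - co)
    return math.sqrt(sum(terms) / len(terms) * annual)
```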
Yang-Zhang Estimator:
• The Yang-Zhang estimator combines the Garman-Klass and Rogers-Satchell estimators, so it is based on the open, close, high, and low and can handle non-zero drift. It also extends the calculation so that the estimator can handle overnight jumps in the data.
• This is the most powerful of the range-based estimators. It has the minimum variance error among them, and it is 14 times more efficient than the close-to-close estimator. When overnight and daily volatility are correlated, it may slightly underestimate volatility.
• 1.34 is the optimal value for alpha according to the paper. The alpha constant in the calculation can be adjusted in the settings. Note: There are already some volatility estimators coded on TradingView; some are right, some are wrong. For the Yang-Zhang estimator, I have not seen a correct version on TV.
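The Yang-Zhang variance is the overnight variance plus a weighted mix of the open-to-close variance and the Rogers-Satchell variance, with the weight k derived from alpha = 1.34 as in the paper. A sketch (illustrative only; not the Pine source):

```python
import math

def yang_zhang_hv(opens, highs, lows, closes, annual=252, alpha=1.34):
    """Yang-Zhang HV: overnight + k * open-to-close + (1 - k) * Rogers-Satchell."""
    n = len(closes) - 1
    overnight = [math.log(opens[i] / closes[i - 1]) for i in range(1, len(closes))]
    open_close = [math.log(closes[i] / opens[i]) for i in range(1, len(closes))]

    def svar(xs):  # sample variance (n - 1 divisor)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    rs = sum(math.log(highs[i] / closes[i]) * math.log(highs[i] / opens[i])
             + math.log(lows[i] / closes[i]) * math.log(lows[i] / opens[i])
             for i in range(1, len(closes))) / n
    k = (alpha - 1) / (alpha + (n + 1) / (n - 1))   # minimum-variance weight
    var = svar(overnight) + k * svar(open_close) + (1 - k) * rs
    return math.sqrt(var * annual)
```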
EWMA Estimator:
• EWMA stands for Exponentially Weighted Moving Average. The Close-to-Close and all other estimators here are equally weighted.
• EWMA weighs recent volatility more and older volatility less. This is useful because volatility is usually autocorrelated: the autocorrelation decays close to exponentially, as you can see by using an Autocorrelation Function indicator on absolute or squared returns. This autocorrelation produces volatility clustering, which makes recent volatility more informative. Exponentially weighted volatility therefore suits the properties of volatility well.
• RiskMetrics uses 0.94 for lambda, which roughly corresponds to a 30-bar lookback. In this indicator, lambda is coded to adjust with the lookback. EWMA also makes it easy to forecast volatility one period ahead.
• However, EWMA volatility is not often used because there are better options to weight volatility such as ARCH and GARCH.
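The EWMA recursion is a one-liner per bar; a sketch (illustrative; seeding the variance with the first squared return is my own choice, and the RiskMetrics lambda of 0.94 is the default):

```python
import math

def ewma_hv(closes, lam=0.94, annual=252):
    """EWMA volatility (RiskMetrics-style), seeded with the first squared return."""
    r = [math.log(closes[i] / closes[i - 1]) for i in range(1, len(closes))]
    var = r[0] ** 2
    for x in r[1:]:
        var = lam * var + (1 - lam) * x ** 2   # recent returns weighted more
    return math.sqrt(var * annual)
```

With a perfectly constant return, the recursion stays at the squared return, so the output equals the return times the annualization factor.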
Adjusted Mean Absolute Deviation Estimator:
• This estimator does not use the standard deviation to calculate volatility. It uses the distance of the log return from its moving average as volatility.
• It's a simple and effective way to calculate volatility. The difference is that this estimator does not square the log returns. The paper suggests it has more predictive power.
• The mean absolute deviation here is adjusted to get rid of the bias. It scales the value so that it can be comparable to the other historical volatility estimators.
• In Nassim Taleb's paper, he notes that people sometimes confuse MAD with the standard deviation in volatility measurements, and he suggests using the mean absolute deviation instead of the standard deviation when talking about volatility.
Adjusted Median Absolute Deviation Estimator:
• This is another estimator that does not use standard deviation to measure volatility.
• Using the median gives a more robust estimator when there are extreme values in the returns. It works better in fat-tailed distribution.
• The median absolute deviation is adjusted by maximum likelihood estimation so that its value is scaled to be comparable to other volatility estimators.
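Both deviation-based estimators can be sketched together (illustrative; the scaling constants sqrt(pi/2) for the mean absolute deviation and 1.4826 for the median absolute deviation are the standard normal-consistency adjustments, which I assume is what the "adjusted" in the section names refers to):

```python
import math
import statistics

def mad_hv(closes, annual=252):
    """Adjusted mean absolute deviation HV; MAD * sqrt(pi/2) matches sigma under normality."""
    r = [math.log(closes[i] / closes[i - 1]) for i in range(1, len(closes))]
    mean = sum(r) / len(r)
    mad = sum(abs(x - mean) for x in r) / len(r)
    return mad * math.sqrt(math.pi / 2) * math.sqrt(annual)

def median_ad_hv(closes, annual=252):
    """Adjusted median absolute deviation HV; MedAD * 1.4826 matches sigma under normality."""
    r = [math.log(closes[i] / closes[i - 1]) for i in range(1, len(closes))]
    med = statistics.median(r)
    med_ad = statistics.median([abs(x - med) for x in r])
    return med_ad * 1.4826 * math.sqrt(annual)
```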
█ FEATURES
• You can select the volatility estimator models in the Volatility Model input
• Historical Volatility is annualized. You can type the number of trading days in a year into the Annual input, based on the asset you are trading.
• Alpha is used to adjust the Yang Zhang volatility estimator value.
• Percentile Length adjusts the lookback for percentile coloring.
• The gradient coloring is based on the percentile value (0-100). The higher the percentile value, the warmer the color, indicating high volatility; the lower the percentile value, the colder the color, indicating low volatility.
• When percentile coloring is off, it won’t show the gradient color.
• You can also use invert color to give high volatility a cold color and low volatility a warm color. Volatility has some mean-reversion properties; when volatility is very low and the color is close to aqua, you would expect it to expand soon, and when volatility is very high and close to red, you would expect it to contract and cool down.
• When the background signal is on, it signals when HVP is very low, warning that there might be a volatility expansion soon.
• You can choose the plot style, such as lines, columns, or areas, in the plot style input.
• When the show information panel is on, a small panel will display on the right.
• The information panel displays the historical volatility model name, the 50th percentile of HV, and the HV percentile. The 50th percentile of HV is the median of HV; you can compare it with the current HV value to see how far above or below it is, to get an idea of how high or low HV is. The HV Percentile value runs from 0 to 100 and tells us the percentage of periods over the entire lookback in which historical volatility traded below the current level. The higher the HVP, the higher HV is compared to its historical values. The gradient color is also based on this value.
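The percentile-rank definition above ("percentage of periods that traded below the current level") can be sketched directly (illustrative; function name is my own):

```python
def hv_percentile(hv_series, length):
    """Percent of the last `length` HV values that are below the current value."""
    window = hv_series[-length:]
    current = hv_series[-1]
    below = sum(1 for v in window if v < current)
    return 100 * below / length
```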
█ HOW TO USE If you haven't used the HVP indicator, we suggest starting with it. This indicator is essentially historical volatility with HVP coloring: it displays HVP values in the color and panel, but it is not range-bound like HVP and it displays HV values. The gradient color gives a quick sense of how high or low the current volatility is compared to its historical values. You can also use volatility mean reversion to time the market: high volatility suggests volatility may contract soon (the move is about to end and the market will cool down), while low volatility suggests a volatility expansion may come soon (the market is about to move).
█ FINAL THOUGHTS
HV vs ATR: The volatility estimator concepts above trace the history of historical volatility estimation research in quantitative finance, a timeline of range-based estimators from Parkinson volatility to Yang-Zhang volatility. We hope these descriptions show that even though ATR is the most popular volatility indicator in technical analysis, it is not the best estimator. Almost no one in quant finance uses ATR to measure volatility (otherwise these papers would be about improving ATR measurements instead of HV). As you can see, there are far more advanced volatility estimators that also take the open, close, high, and low into account. HV values are based on log returns with some calculation adjustments, and HV can be scaled in terms of price just like ATR. For profit-taking ranges, ATR is not based on probabilities, while historical volatility can be used in a probability distribution function to calculate the probability of ranges, as in the Expected Move indicator.
Other Estimators: There are also more advanced historical volatility estimators, such as high-frequency sampled HV that uses intraday data to calculate volatility. We will publish a high-frequency volatility estimator in the future. There are also ARCH and GARCH models that take volatility clustering into account. GARCH models require maximum likelihood estimation, which needs a solver to find the best weights for each component; this is currently not possible on TV due to the large computational power requirements. The other indicators on TV claiming to be GARCH are wrong.
EMA Crossover + Angle + Candle Pattern + Breakout (Clean) final
A 9/15 EMA strategy.
Visible Range
Overview
This is a precision tool designed for quantitative traders and engineers who need exact control over their chart's visual scope. Unlike standard time calculations that fail in markets with trading breaks (like A-Shares, Futures, or Stocks), this indicator uses a loop-back mechanism to count the actual number of visible bars, ensuring your indicators (e.g., MA60, MA200) have sufficient sample data.
Why use this? If you use multi-timeframe layouts (e.g., Daily/Hourly/15s), it is critical to know exactly how much data is visible.
The Problem: In markets like the Chinese A-Share market (T+1, 4-hour trading day), calculating Time Range / Timeframe results in massive errors because it includes closed market hours (lunch breaks, nights, weekends).
The Solution: This script iterates through the visible range to count the true bar_index, providing 100% accurate data density metrics.
Key Features
True Bar Counting: Uses a for loop to count actual candles, ignoring market breaks. Perfect for non-24/7 markets.
Integer Precision: Displays time ranges (Days, Hours, Mins, Secs) in clean integers. No messy decimals.
Compact UI: Displays information in a single line (e.g., View: 30 Days (120 Bars)), defaulting to the top-right corner to save screen space.
Fully Customizable: Adjustable position, text size, and colors to fit any dark/light theme.
Performance Optimized: Includes max_bars_back limits to prevent browser lag on deep history lookups.
Settings
Position: Default Top Right (can be moved to any corner).
Max Bar Count: Default 5000 (Safety limit for loop calculation).
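The loop-back counting idea can be illustrated outside Pine Script with a hypothetical Python analogue (the timestamp representation and function name are assumptions for illustration, not the script's actual code):

```python
def count_visible_bars(bar_times, left_time, right_time, max_count=5000):
    """Count actual bars inside the visible window, ignoring market breaks.

    bar_times: bar open times in ascending order; max_count mirrors the
    indicator's safety limit for the loop.
    """
    count = 0
    # walk backwards from the newest bar, like the script's loop-back mechanism
    for t in reversed(bar_times):
        if t > right_time:
            continue
        if t < left_time or count >= max_count:
            break
        count += 1
    return count
```

Because it counts bars that actually exist rather than dividing elapsed time by the timeframe, lunch breaks, nights, and weekends never inflate the result.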
Santhosh Time Block Highlighter
I have created an indicator to differentiate market trend/momentum across different time blocks during the trading day. This helps us understand the market pattern and avoid entering trades during consolidation/distribution. It also helps measure volatility and market sentiment.
Moving Average Exponential 21 & 55 Cloud
Take the trade after price goes into the cloud and comes back.
Shock Wave EMA Ribbon with adjustable time period
A 9 EMA and 21 EMA script with background plot. All colors and settings toggle on and off. Simple but effective. This one has selectable time periods so the ribbon can stay fixed on your desired time scale.
VIX vs VIX1Y Spread
Spread Calculation: Shows VIX1Y minus VIX
Positive = longer-term vol higher (normal contango)
Negative = near-term vol elevated (inverted term structure)
Can help identify longer term risk pricing of equity assets.
# Sector Rotation - Risk Preference Indicator
## Overview
This indicator measures market risk appetite by comparing the relative strength between **Aggressive** and **Defensive** sectors. It provides a clean, single-line visualization to help traders identify market sentiment shifts and potential trend reversals.
## How It Works
The indicator calculates a **Bullish/Bearish Ratio** by dividing the average price of aggressive sector ETFs by defensive sector ETFs, then normalizing to a baseline of 100.
**Formula:**
- Ratio = (Aggressive Sectors Average / Defensive Sectors Average) × 100
**Interpretation:**
- **Ratio > 100**: Risk-on sentiment (Aggressive sectors outperforming Defensive)
- **Ratio < 100**: Risk-off sentiment (Defensive sectors outperforming Aggressive)
- **Ratio ≈ 100**: Neutral (Both sector groups performing equally)
## Default Sectors
**Defensive Sectors** (Safe havens during uncertainty):
- XLP - Consumer Staples Select Sector SPDR Fund
- XLU - Utilities Select Sector SPDR Fund
- XLV - Health Care Select Sector SPDR Fund
**Aggressive Sectors** (Growth-oriented, higher risk):
- XLK - Technology Select Sector SPDR Fund
- XBI - SPDR S&P Biotech ETF
- XRT - SPDR S&P Retail ETF
## Features
✅ **Fully Customizable Sectors** - Choose any ETFs/tickers for each sector group
✅ **Smoothing Control** - Adjustable SMA period to reduce noise (default: 2)
✅ **Clean Visualization** - Single blue line for easy interpretation
✅ **Multi-timeframe Support** - Works on any timeframe
✅ **Lightweight** - Minimal calculations for fast performance
## Settings
### Defensive Sectors Group
- **Defensive Sector 1**: First defensive ETF ticker (default: XLP)
- **Defensive Sector 2**: Second defensive ETF ticker (default: XLU)
- **Defensive Sector 3**: Third defensive ETF ticker (default: XLV)
### Aggressive Sectors Group
- **Aggressive Sector 1**: First aggressive ETF ticker (default: XLK)
- **Aggressive Sector 2**: Second aggressive ETF ticker (default: XBI)
- **Aggressive Sector 3**: Third aggressive ETF ticker (default: XRT)
### Display Settings
- **Smoothing Length**: SMA period for ratio smoothing (default: 2, range: 1-50)
- Lower values = More responsive but noisier
- Higher values = Smoother but more lagging
## Use Cases
### 1. Market Regime Identification
- **Rising Ratio (trending up)** → Bull market / Risk-on environment
- Aggressive sectors leading, investors chasing growth
- Favorable for long positions in tech, growth stocks
- **Falling Ratio (trending down)** → Bear market / Risk-off environment
- Defensive sectors leading, investors seeking safety
- Consider defensive positioning or short opportunities
### 2. Divergence Analysis
- **Bullish Divergence**: Price makes new lows but ratio rises
- Suggests underlying strength returning
- Potential market bottom forming
- **Bearish Divergence**: Price makes new highs but ratio falls
- Suggests weakening momentum
- Potential market top forming
### 3. Trend Confirmation
- **Strong uptrend + Rising ratio** → Confirmed bullish trend
- **Strong downtrend + Falling ratio** → Confirmed bearish trend
- **Uptrend + Falling ratio** → Weakening trend, watch for reversal
- **Downtrend + Rising ratio** → Potential trend exhaustion
## Best Practices
⚠️ **Timeframe Selection**
- Recommended: Daily, 4H, 1H for cleaner signals
- Lower timeframes (15m, 5m) may produce noisy signals
⚠️ **Complementary Analysis**
- Use alongside price action and volume analysis
- Combine with support/resistance levels
- Not designed as a standalone trading system
⚠️ **Market Conditions**
- Most effective in trending markets
- Less reliable during ranging/consolidation periods
- Works best in liquid, well-traded sectors
⚠️ **Customization Tips**
- Can substitute with international sectors (EWU, EWZ, etc.)
- Can use crypto sectors (DeFi vs Layer1, etc.)
- Adjust smoothing based on trading style (day trading = 2-5, swing = 10-20)
## Display Options
### Default View (overlay=false)
- Shows in separate pane below chart
- Dedicated scale for ratio values
### Alternative View
- Can be moved to main chart pane (drag indicator)
I typically overlay this indicator on the SPY daily chart to observe divergences. I don’t focus on specific values but rather on the direction of the trend.
The author is not responsible for any trading losses incurred using this indicator.
## Support & Feedback
For questions, feature requests, or bug reports:
- Comment below
- Send a private message
- Check for updates regularly
If you find this indicator useful, please:
- ⭐ Leave a like/favorite
- 💬 Share your experience in comments
- 📊 Share charts showing interesting patterns
Expected Move Bands
Expected move is the amount that an asset is predicted to increase or decrease from its current price, based on the current levels of volatility.
In this model, we assume asset price follows a log-normal distribution and the log return follows a normal distribution.
Note: Normal distribution is just an assumption, it's not the real distribution of return
Settings:
"Estimation Period Selection" is for selecting the period we want to construct the prediction interval.
For "Current Bar", the interval is calculated from data as of the previous bar's close, so changes in the current price have little effect on the range. "Current bar" means the estimated range is for when this bar closes. E.g., if the timeframe is 4 hours and 1 hour has passed, the interval covers the time this bar has left, in this case 3 hours.
For "Future Bars", the interval is calculated based on the current close. Therefore the range will be very much affected by the change in the current price. If the current price moves up, the range will also move up, vice versa. Future Bars is estimating the range for the period at least one bar ahead.
There are also other source selections based on high low.
The Time setting is used when "Future Bars" is chosen for the period. Its value is how many bars ahead of the current bar the range is estimated for. When time = 1, the interval is constructed for 1 bar ahead. E.g., if the timeframe is 4 hours, it estimates the next 4-hour range regardless of how much time has passed in the current bar.
Note: It's probably better to use "probability cone" for visual presentation when time > 1
Volatility Models :
Sample SD: the traditional sample standard deviation, most commonly used; uses an (n − 1) divisor to correct the bias
Parkinson: uses high/low to estimate volatility; assumes continuous prices (no gaps) and zero mean (no drift); about 5 times more efficient than close-to-close
Garman-Klass: uses OHLC; assumes zero drift and no jumps; about 7 times more efficient
Yang-Zhang Garman-Klass Extension: adds a jump term to Garman-Klass; has the same value as Garman-Klass on markets with no gaps; about 8 times more efficient
Rogers-Satchell: uses OHLC; assumes a non-zero mean and handles drift, but does not handle jumps; about 8 times more efficient
EWMA: exponentially weighted volatility; weights recent volatility more, making it more reactive and better at accounting for volatility autocorrelation and clustering
Yang-Zhang: uses OHLC; combines Rogers-Satchell and Garman-Klass; handles both drift and jumps; about 14 times more efficient; alpha is the constant weighting the Rogers-Satchell component to minimize variance
Median absolute deviation: a more direct way of measuring volatility without the standard deviation; the MAD used here is adjusted to be an unbiased estimator
Volatility Period is the sample size for variance estimation. A longer period makes the estimated range more stable and less reactive to recent prices, and the distribution is more significant with a larger sample. A shorter period makes the range more responsive to recent prices, which might be better for high-volatility clusters.
Standard deviations:
Standard deviation one shows the estimated range where the closing price will fall about 68% of the time.
Standard deviation two shows the estimated range where the closing price will fall about 95% of the time.
Standard deviation three shows the estimated range where the closing price will fall about 99.7% of the time.
Note: All these probabilities are based on the normal distribution assumption for returns. It's the estimated probability, not the actual probability.
Manually Entered Standard Deviation shows the range of any entered standard deviation. The probability of that range will be presented on the panel.
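Under the log-normal assumption described above, the bands are the current price multiplied by exp(±z · σ_bar · √t), where σ_bar is the de-annualized per-bar volatility. A sketch (illustrative; zero drift and a 252-period year are assumed):

```python
import math

def expected_move_bands(price, annual_vol, bars_ahead=1, z=1.0, periods_per_year=252):
    """Log-normal prediction interval: price * exp(+/- z * sigma_bar * sqrt(t)),
    assuming zero drift and normally distributed log returns."""
    sigma_bar = annual_vol / math.sqrt(periods_per_year)   # de-annualize
    half_width = z * sigma_bar * math.sqrt(bars_ahead)
    return price * math.exp(-half_width), price * math.exp(half_width)
```

With z = 1 this is the 1 SD (about 68%) band; z = 2 gives the about 95% band.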
People usually assume the mean of returns to be zero. To be more accurate, we can account for the drift in price by calculating the geometric mean of returns. Drift matters in the long run, so short lookback periods are not recommended. Assuming zero mean is recommended when time is not greater than 1.
When estimating the future range for time > 1, we typically assume constant volatility and that returns are independent and identically distributed, and we scale the volatility in terms of time to get the future range. However, when there is autocorrelation in returns (when returns are not independent), this assumption fails to account for the effect, and volatility scaled with autocorrelation is required. We use an AR(1) model on the first-order autocorrelation to adjust for the effect. Returns typically don't have significant autocorrelation, so the adjustment is not usually needed. A long length is recommended in the autocorrelation calculation.
Note: The significance of autocorrelation can be checked on an ACF indicator.
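The scaling adjustment follows from the variance of a sum of correlated returns: for first-order autocorrelation ρ (so corr(r_t, r_{t+k}) = ρ^k under AR(1)), Var of the t-period return is σ²[t + 2·Σ_{k=1}^{t−1}(t−k)ρ^k]. A sketch (illustrative; with ρ = 0 it reduces to the usual √t rule):

```python
import math

def scaled_vol(sigma_1, t, rho=0.0):
    """Scale one-period volatility to t periods under AR(1) autocorrelation rho.

    rho = 0 gives the standard sqrt-time rule; positive rho widens the range,
    negative rho narrows it.
    """
    var_multiple = t + 2 * sum((t - k) * rho ** k for k in range(1, t))
    return sigma_1 * math.sqrt(var_multiple)
```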
The multi-timeframe option lets you use a higher-timeframe expected move on a lower timeframe. Only use a timeframe higher than the current one for the input; an error warning will appear when the input TF is lower. The input format is multiplier * time unit, e.g., 1D.
Unit: M for months, W for Weeks, D for Days, integers with no unit for minutes (E.g. 240 = 240 minutes). S for Seconds.
The smoothing option uses a filter to smooth out the range. The filter used here is John Ehlers' SuperSmoother, an advanced smoothing technique that removes aliasing noise. Its effect is similar to a simple moving average with half the lookback length, but smoother and with less lag.
Note: After smoothing, the range no longer represents the probability.
Panel positions can be adjusted in the settings.
X position adjusts the horizontal position of the panel. Higher X moves panel to the right and lower X moves panel to the left.
Y position adjusts the vertical position of the panel. Higher Y moves panel up and lower Y moves panel down.
Step line display changes the style of the bands from line to step line. Step line is recommended because it gets rid of the directional bias of slope of expected move when displaying the bands.
Warnings:
People should not blindly trust the probability, and should be aware of the risk involved in using the normal distribution assumption. Real returns have skewness and high kurtosis. While the skewness is not very significant, the high kurtosis should be noted: real returns have much fatter tails than the normal distribution, which also makes the peak higher. This property makes the tail ranges (such as beyond 2 SD) highly underestimate the actual range, and the body (such as 1 SD) slightly overestimate it. Ranges beyond 2 SD should not be trusted; beware of extreme events in the tails.
Different volatility models have different properties. If you are interested in the accuracy and fit of the expected move, try the Expected Move Occurrence indicator. (Its results also demonstrate the previous point about the drawback of the normal distribution assumption.)
The prediction interval is only for the closing price, not the wicks. It estimates the probability of the price closing at this level, not trading in between. E.g., if the 1 SD range is 100-200, the price can go to 80 or 230 intrabar, but if the bar closes within 100-200 in the end, it is still considered a 68% one-standard-deviation move.
Hurst Exponent - Detrended Fluctuation Analysis
In stochastic processes, chaos theory and time series analysis, detrended fluctuation analysis (DFA) is a method for determining the statistical self-affinity of a signal. It is useful for analyzing time series that appear to be long-memory processes and noise.
█ OVERVIEW
We introduced the concept of the Hurst exponent in our previous open indicator, Hurst Exponent (Simple), an indicator that measures market state from autocorrelation. Here we apply a more advanced and accurate way to calculate the Hurst exponent rather than a simple approximation, so we recommend using this version over our previous publication going forward. The method used here is called detrended fluctuation analysis. (If you are not interested in the math behind the calculation, feel free to skip to the "FEATURES" and "HOW TO USE" sections; however, reading it all is recommended for a better understanding of the mathematical reasoning.)
█ Detrend Fluctuation Analysis
Detrended fluctuation analysis was first introduced by Peng, C.K. (original paper) to measure long-range power-law correlations in DNA sequences. DFA measures the scaling behavior of second-moment fluctuations; the scaling exponent is a generalization of the Hurst exponent.
The traditional way of measuring Hurst exponent is the rescaled range method. However DFA provides the following benefits over the traditional rescaled range method (RS) method:
• It can be applied to non-stationary time series. While asset returns are generally stationary, DFA can measure Hurst more accurately in instances where they are not.
• Based on the asymptotic distributions of DFA and RS, the latter usually overestimates the Hurst exponent (even after the Anis-Lloyd correction), so the expected value of RS Hurst is close to 0.54 instead of the 0.5 it should be, making it harder to judge autocorrelation against the expected value. With DFA, the expected value is significantly closer to 0.5, which makes that threshold much more useful.
• Lastly, DFA requires a lower sample size than the RS method. While RS generally requires thousands of observations to reduce the variance of HE, DFA only needs a sample size greater than a hundred to do the same.
█ Calculation
DFA is a modified root-mean-squares (RMS) analysis of a random walk. In short, DFA computes the RMS error of linear fits over progressively larger bins (non-overlapped “boxes” of similar size) of an integrated time series.
Our signal time series is the log returns. First we subtract the mean from the log returns to get demeaned returns, then we take the cumulative sum of the demeaned returns; the resulting profile is mean-centered and is what the DFA method operates on. Subtracting the mean eliminates the "global trend" of the signal. Applying the scaling analysis to the signal profile instead of the signal itself allows the original signal to be non-stationary when needed. (For example, this process converts an i.i.d. white-noise process into a random walk.)
We slice the cumulative sum into windows of equal size and run a linear regression on each window to measure the linear trend. We then detrend the series by subtracting the regression line from the cumulative sum in each window. The fluctuation is the difference between the cumulative sum and the regression.
We use different window sizes on the same cumulative-sum series. The window sizes are log-spaced, e.g., powers of 2: 2, 4, 8, 16, ... This is where the scale-free measurement comes in: how we measure the fractal nature and self-similarity of the time series, and how well the smaller scales represent the larger scale.
As the window size decreases, we use more regression lines to measure the trend, so the regression fit improves and the fluctuation shrinks. It allows one to zoom into the "picture" to see the details. The linear regressions are like rulers: using more rulers to measure smaller-scale details gives a more precise measurement.
The exponent we are measuring is the relationship between window size and regression fit (the rate of change). The more complex the time series, the more the fit depends on decreasing the window size (using more regression lines to measure); the less complex, or the more trending, the series, the less it depends on it. The fit is measured by the average of the root mean square errors (RMS) of the regressions across windows.
The root mean square error is the square root of the mean squared difference between the cumulative sum and the regression line. The following chart displays the average RMS for different window sizes. As the chart shows, smaller window sizes show more detail due to the higher complexity of the measurement.
The last step is to measure the exponent. To measure a power-law exponent, we measure the slope on a log-log plot: the x-axis is the log of the window size, the y-axis is the log of the average RMS, and we run a linear regression through the plotted points. The slope of that regression is the exponent. The relationship between RMS and window size is easy to see on the chart: a larger RMS means a poorer regression fit. Since RMS increases (fit worsens) as window size increases (fewer regressions are used), we focus on how fast RMS increases as the window size grows.
If the slope is < 0.5, It means the rate of of increase in RMS is small when window size increases. Therefore the fit is much better when it's measured by a large number of linear regression lines. So the series is more complex. (Mean reversion, negative autocorrelation).
If the slope is > 0.5, It means the rate of increase in RMS is larger when window sizes increases. Therefore even when window size is large, the larger trend can be measured well by a small number of regression lines. Therefore the series has a trend with positive autocorrelation.
If the slope = 0.5, It means the series follows a random walk.
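The procedure above (cumulative sum, log-spaced windows, per-window linear detrending, log-log slope of the average RMS) can be sketched in a few lines. This is an illustrative Python/NumPy reconstruction of standard DFA, not the indicator's Pine Script source; the window sizes in `scales` are example values.

```python
import numpy as np

def dfa_exponent(returns, scales=(8, 16, 32, 64)):
    """Estimate the DFA scaling exponent of a return series.

    Steps mirror the description above: integrate the series, split it
    into windows of log-spaced sizes, detrend each window with a linear
    regression, then regress log(average RMS) on log(scale).
    """
    y = np.cumsum(returns - np.mean(returns))  # cumulative sum (profile)
    rms_per_scale = []
    for n in scales:
        n_windows = len(y) // n
        errs = []
        for i in range(n_windows):
            seg = y[i * n:(i + 1) * n]
            x = np.arange(n)
            # fit and remove a linear trend within this window
            coeffs = np.polyfit(x, seg, 1)
            resid = seg - np.polyval(coeffs, x)
            # RMS = sqrt of the mean squared residual
            errs.append(np.sqrt(np.mean(resid ** 2)))
        rms_per_scale.append(np.mean(errs))
    # slope of log(RMS) vs log(scale) is the exponent
    slope, _ = np.polyfit(np.log(scales), np.log(rms_per_scale), 1)
    return slope
```

On white-noise returns the slope comes out near 0.5, while a strongly persistent series pushes it well above 0.5, matching the interpretation rules above.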
█ FEATURES
• Sample Size is the lookback period for the calculation. Even though DFA requires a smaller sample size than R/S, a sample size greater than 50 is recommended for an accurate measurement.
• When a larger sample size is used (for example, a lookback length of 1000), loading may be slower because of the longer calculation. Date Range limits the number of historical bars calculated. When loading is too slow, change the date range from "all" to a number of weeks/days/hours to reduce loading time. (Credit to allanster)
• The “show filter” option applies a smoothing moving average to the exponent.
• Log scale is my workaround for dynamic log-space scaling. Traditionally the smallest log spacing for bars is a power of 2, and an accurate regression requires at least 10 points, which would force a minimum lookback of 1024. I made changes to round fractional log spacings into integer bar counts, which allows the spacing to be less than 2.
• For a more accurate calculation, a larger "Base Scale" and "Max Scale" should be selected. However, when the sample size is small, larger values cause issues. The general rule: the larger the sample size, the larger the "Base Scale" and "Max Scale" that can be used. Users are encouraged to try a larger scale, as long as increasing it doesn't cause issues.
The following chart shows how the value changes across various scales. As shown, increasing the scale can sometimes make the value noisy and prone to overshooting.
When using the lowest scale (4,2), the value is stable. When we increase the scale to (8,2), the value is still fine. However, at (8,4) it begins to look messy, and at (16,4) it starts overshooting. Therefore, (8,2) seems optimal for our use.
█ How to Use
Similar to the Hurst Exponent (Simple), 0.5 is the threshold for determining long-term memory.
• Under the efficient market hypothesis, the market follows a random walk and the Hurst exponent should be 0.5. When the Hurst exponent is significantly different from 0.5, the market is inefficient.
• When the Hurst exponent is > 0.5: positive autocorrelation. The market is trending; positive returns tend to be followed by positive returns, and vice versa.
• When the Hurst exponent is < 0.5: negative autocorrelation. The market is mean-reverting; positive returns tend to be followed by negative returns, and vice versa.
However, we can't tell whether a Hurst exponent value arose by random chance just by looking at the 0.5 level. Even for a pure random walk, the measured Hurst exponent will never be exactly 0.5; it will be close, say 0.506, but not equal. That's why we need a level that tells us when the Hurst exponent is significant.
So we also computed a 95% confidence interval via Monte Carlo simulation. The confidence band adjusts itself to the sample size. When the Hurst exponent is above the upper or below the lower confidence level, the value is statistically significant: the efficient market hypothesis is rejected and the market shows significant inefficiency.
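A minimal sketch of how such a Monte Carlo band can be built: simulate many pure-noise return series of the same sample size, estimate the exponent on each, and take the 2.5th/97.5th percentiles. The `diffusion_hurst` estimator below is a deliberately simple stand-in for the script's DFA calculation, and the trial count and lags are example values, not the indicator's settings.

```python
import numpy as np

def diffusion_hurst(returns, lags=(2, 4, 8, 16, 32)):
    """Crude Hurst estimate from the diffusion scaling of the cumulative sum:
    the spread of lag-n differences of a random walk grows like n**H."""
    y = np.cumsum(returns)
    spreads = [np.std(y[lag:] - y[:-lag]) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(spreads), 1)
    return slope

def mc_confidence_band(sample_size, n_trials=300, seed=7):
    """95% band for the estimator under the random-walk null:
    estimate the exponent on many simulated noise series of the given
    sample size and take the 2.5th/97.5th percentiles."""
    rng = np.random.default_rng(seed)
    values = [diffusion_hurst(rng.standard_normal(sample_size))
              for _ in range(n_trials)]
    return np.percentile(values, 2.5), np.percentile(values, 97.5)
```

Because estimation noise shrinks with more data, the band computed this way automatically tightens as the sample size grows, which is the behavior described above.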
The market state is painted in different colors, as the following chart shows. Users can also read the state from the table displayed on the right.
An important point: the Hurst value only represents the market state according to past measurements, which means it only tells you the state now and in the past. If the Hurst exponent on a sample size of 100 shows a significant trend, it means the market trended significantly over the past 100 bars. It doesn't mean the market will continue to trend; it is not a forecast of future market state.
However, this suggests another way to use it. The market is not always random, and it is not always inefficient; the state switches from time to time. But there's one pattern: when the market stays inefficient for too long, market participants notice and try to take advantage of it, so the inefficiency gets traded away. That's why the Hurst exponent won't stay in significant trend or mean reversion for long: when it's significant, participants see it too, and the market adjusts itself back to normal.
The Hurst exponent can be used as a mean-reverting oscillator itself. In a liquid market, the value tends to return inside the confidence interval after significant moves (in smaller markets, it can stay inefficient for a long time). So when the Hurst exponent shows significant values, the market has just entered a significant trend or mean-reversion state. However, when it stays outside the confidence interval for too long, it suggests the market may instead be closer to the end of the trend or mean-reversion phase.
A larger sample size makes the Hurst exponent statistics more reliable. Therefore, if users want to know whether long-term memory exists in general on the selected ticker, they can use a large sample size and maximize the log scale, e.g. a 1024 sample size with scale (16,4).
The following chart is Bitcoin on the daily timeframe with a 1024 lookback. It suggests the Bitcoin market tends to have long-term memory in general: it usually shows a significant trend and is more inefficient in its early stage.
FVG + Bollinger + Toggles + Swing H&L (Taken/Close modes)
This indicator combines multiple advanced market-structure tools into one unified system.
It detects A–C Fair Value Gaps (FVG) and plots them as dynamic boxes projected a fixed number of bars forward.
Each bullish or bearish FVG updates in real time and “closes” once price breaks through the opposite boundary.
The indicator also includes Bollinger Bands based on EMA-50 with adjustable deviation settings for volatility context.
Swing Highs and Swing Lows are identified using pivot logic and are drawn as dynamic lines that change color once taken out.
You can choose whether swings end on a close break or on any touch/violation of the level.
All visual elements—FVGs, Bollinger Bands, and Swing Lines—can be individually toggled on or off from the settings panel.
A time-window session box is included, allowing you to highlight a custom intraday window based on your selected timezone.
The session box automatically tracks the high and low of the window and locks the final range once the window closes.
Overall, the tool is designed for traders who want a structured, multi-layered view of liquidity, volatility, and intraday timing.
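For reference, a common three-candle fair value gap rule can be sketched as follows. The exact A–C definition used by this script isn't shown, so this Python sketch assumes the widely used convention: a bullish gap when candle C's low clears candle A's high, and the mirror image for a bearish gap.

```python
def find_fvgs(highs, lows):
    """Scan OHLC series for three-candle fair value gaps.

    Assumed convention (not taken from the script itself):
    bullish FVG when the low of candle C exceeds the high of candle A;
    bearish FVG when the high of candle C sits below the low of candle A.
    Returns (index_of_C, kind, gap_top, gap_bottom) tuples.
    """
    gaps = []
    for i in range(2, len(highs)):
        if lows[i] > highs[i - 2]:            # bullish gap left by candle B
            gaps.append((i, "bull", lows[i], highs[i - 2]))
        elif highs[i] < lows[i - 2]:          # bearish gap left by candle B
            gaps.append((i, "bear", lows[i - 2], highs[i]))
    return gaps
```

In the indicator the resulting boxes are then projected forward and closed once price trades through the opposite boundary; that lifecycle logic is omitted here.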
NQUSB Sector Industry Stocks Strength
A Comprehensive Multi-Industry Performance Comparison Tool
The complete Pine Script code and supporting Python automation scripts are available on GitHub:
GitHub Repository: github.com
Original idea from www.tradingview.com
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
═══ WHAT'S NEW ═══
4-Level Hierarchical Navigation:
Primary: All 11 NQUSB sectors (NQUSB10, NQUSB15, NQUSB20, etc.)
Secondary (Default): Broad sectors like Technology, Energy
Tertiary: Industry groups within sectors
Quaternary: Individual stocks within industries (37 semiconductors)
Enhanced Stock Coverage:
1,176 total stocks across 129 industries
37 semiconductor stocks
Market-cap weighted selection: 60% tech / 35% others
Range: 1-37 stocks per industry
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
═══ CORE FEATURES ═══
1. Drill-Down/Drill-Up Navigation
View NVDA at different granularity levels:
Quaternary: ● NVDA ranks #3 of 37 semiconductors
Tertiary: ✓ Semiconductors at 85% (strongest in tech hardware)
Secondary: ✓ Tech Hardware at 82% (stronger than software)
Primary: ✓ Technology at 78% (#1 sector overall)
Insight: One indicator, one stock, four perspectives - instantly see if strength is stock-specific, industry-specific, or sector-wide.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
2. Visual Current Stock Identification
Violet Markers - Instant Recognition:
● (dot) marker when current stock is in top N performers
✕ (cross) marker when current stock is below top N
Violet color (#9C27B0) on both symbol and value labels
Example: "NVDA ● ranks #3 of 37"
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
3. Rank Display in Title
Dynamic title shows performance context:
"Semiconductors (RS Rating - 3 Months) | NVDA ranks #3 of 37"
#1 = Best performer, higher number = lower rank
Total adjusts if current stock auto-added
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
4. Auto-Add Current Stock
Always Included:
Current stock automatically added if not in predefined list
Example: Viewing PRSO → "PRSO ranks #37 of 39 ✕"
Works for any stock - from NVDA to obscure small-caps
Violet markers ensure visibility even when ranked low
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
═══ DUAL PERFORMANCE METRICS ═══
RS Rating (Relative Strength):
Normalized strength score 1-99
Compare stocks across different price ranges
Default benchmark: SPX
% Return:
Simple percentage price change
Direct performance comparison
11 Time Periods:
1 Week, 2 Weeks, 1 Month, 2 Months, 3 Months (default), 6 Months, 1 Year, YTD, MTD, QTD, Custom (1-500 days)
Result: 22 analytical combinations (2 metrics × 11 periods)
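The description doesn't give the exact RS Rating formula, but a 1-99 normalized rank can be sketched as a percentile of each stock's excess return over the benchmark. Everything here (the symbols, returns, and ranking scheme) is illustrative, not the script's actual computation.

```python
def rs_ratings(stock_returns, benchmark_return):
    """Map each stock's excess return over the benchmark to a 1-99 rank.

    Illustrative percentile scheme: the weakest performer gets the lowest
    rank, the strongest gets 99. Not the indicator's exact formula.
    """
    excess = {sym: r - benchmark_return for sym, r in stock_returns.items()}
    ordered = sorted(excess, key=excess.get)       # weakest first
    n = len(ordered)
    return {sym: max(1, round(99 * (i + 1) / n)) for i, sym in enumerate(ordered)}
```

The % Return metric, by contrast, would just compare the raw `stock_returns` values directly, with no benchmark or normalization step.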
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
═══ USE CASES ═══
Sector Rotation Analysis:
Is NVDA's strength semiconductors-specific or tech-wide?
Drill through all 4 levels to find answer
Identify which industry groups are leading/lagging
Finding Hidden Gems:
JPM ranks #3 of 13 in Major Banks
But Financials sector weak overall (68%)
= Relative strength play in weak sector
Cross-Industry Comparison:
129 industries covered
Market-wide scan capability
Find strongest performers across all sectors
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
═══ TECHNICAL SPECIFICATIONS ═══
V32 Stats:
Total Industries: 129
Total Stocks: 1,176
File Size: 82,032 bytes (80.1 KB)
Request Limit: 39 max (Semiconductors), 10-16 typical
Granularity Levels: 4 (Primary → Quaternary)
Smart Stock Allocation:
Technology industries: 60% coverage
Other industries: 35% coverage
Market-cap weighted selection
Formula: MIN(39, MAX(5, CEILING(total × percentage)))
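The allocation formula above translates directly into code. This sketch only restates the stated MIN/MAX/CEILING rule; the example inputs are made up to show the floor of 5 and the cap of 39 in action.

```python
import math

def stocks_for_industry(total, percentage):
    """MIN(39, MAX(5, CEILING(total * percentage))) from the spec above."""
    return min(39, max(5, math.ceil(total * percentage)))
```

So a large tech industry at 60% coverage hits the 39-stock request cap, while a tiny industry at 35% coverage is floored at 5 stocks.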
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
═══ KEY ADVANTAGES ═══
vs. Single Industry Tools:
✓ 129 industries vs 1
✓ Market-wide perspective
✓ Hierarchical navigation
✓ Sector rotation detection
vs. Manual Comparison:
✓ No ETF research needed
✓ Instant visual markers
✓ Automatic ranking
✓ One-click drill-down
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
For complete documentation, Python automation scripts, and CSV data files:
github.com
Version: V32
Last Updated: 2025-11-30
Pine Script Version: v5
Current Candle Vertical Line
Description
The Current Candle Vertical Line indicator draws a fully customizable vertical line on the most recent candle (live bar). This provides a clear visual anchor for active traders, especially during fast-moving markets or multi-chart setups.
The line extends from the top of the chart to the bottom, ensuring maximum visibility—regardless of zoom level or price scale.
Features
✔ Fully customizable line color
✔ Adjustable opacity (0–100%)
✔ Custom line thickness
✔ Three selectable line styles: Solid, Dashed, or Dotted
✔ Automatically deletes old line and redraws on the newest bar
✔ Works on any timeframe, chart type, and asset
Use Cases
Highlight the current candle during live trading
Keep visual focus when scalping or trading futures
Align entries with indicators on lower or higher timeframes
Improve visibility during high volatility
Support multi-monitor or multi-chart layouts
Notes
The indicator draws the line only on the last active bar.
Since overlay=true, the line appears in the main chart panel.
This script does not generate alerts (visual marker only).
🚀 Hull Squeeze + Money Flow Trinity - Ultimate Breakout Hunter
This is a high-octane, multi-factor breakout hunter designed to capture explosive moves by identifying the rare confluence of extreme price compression, aligned trend, and confirmation from institutional money flow. It combines three best-in-class market analysis tools into a single, comprehensive signaling system. The indicator is engineered to filter out noisy, low-probability setups, focusing instead on high-conviction events like "MEGA SQUEEZE FIRE" and the elusive "GOD MODE SETUP".
How the Trinity Works:
📊 Hull Ribbon & Compression: Uses a ribbon of Hull Moving Averages (HMAs) to filter the underlying trend and, crucially, to measure the compression of volatility relative to ATR. When the ribbon is highly compressed, it signals that the market is coiled and ready for a major move: a Pre-Squeeze warning.
💥 Squeeze Detection: Implements the classic Bollinger Band (BB) / Keltner Channel (KC) squeeze logic to pinpoint the exact moment volatility is drained (Squeeze ON) and the moment the resulting energy is released (Squeeze FIRE).
💰 Money Flow Trinity: Confirms the quality of the move by aggregating three volume-based indicators: Force Index, Chaikin Money Flow (CMF), and the Accumulation/Distribution (A/D) Line. This generates a Money Flow Score (up to 3) that validates the directional pressure, ensuring the breakout is backed by genuine buying or selling.
The Ultimate Edge:
The indicator plots actionable signals directly on the chart and provides a real-time dashboard displaying the status of each component and the final Signal Status. Use it to spot low-risk, high-reward opportunities on your favorite instruments.
Dashboard AIO Pro: RSI, MACD & Stoch RSI [THF]
Description:
This indicator provides a comprehensive "All-in-One" Dashboard that monitors three major momentum oscillators: RSI, MACD, and Stochastic RSI. It displays their real-time values and interprets their signals (Buy/Sell/Neutral) in a clean, customizable table directly on your chart.
Key Features:
Consolidated View: Instead of cluttering your chart with three separate indicator panes, this dashboard summarizes the market state in one compact table.
Dynamic Summary: The script calculates an "Overall Trend" based on a voting system. If 2 or more indicators agree on a direction, the summary updates to show a "Strong Trend".
Fully Customizable Colors: Users can customize the colors for Strong Buy, Buy, Sell, Strong Sell, and Neutral states via the settings menu to match their chart theme.
Alerts Included: Built-in alert conditions for "Strong Buy Consensus" and "Strong Sell Consensus".
How it Works (The Logic):
RSI (14):
Value > 70: Considered Overbought (Bearish signal).
Value < 30: Considered Oversold (Bullish signal).
MACD (12, 26, 9):
Bullish: MACD Line > Signal Line AND Histogram is rising.
Bearish: MACD Line < Signal Line AND Histogram is falling.
Stoch RSI (14, 14, 3, 3):
Evaluates K% line position relative to 80/20 levels and crossovers with D% line.
Overall Summary:
The script assigns a score (+1 for Bullish, 0 for Neutral).
If the total score >= 2, the trend is identified as "Uptrend".
If the indicators show divergent signals, the status remains "Ranging".
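Based on the voting rule described (each bullish component contributes +1; a total of 2 or more flags an uptrend), the summary logic can be sketched like this. The function signature and the reduction of the MACD and Stoch RSI reads to booleans are simplifications for illustration, not the script's actual inputs.

```python
def dashboard_summary(rsi, macd_bullish, stoch_bullish):
    """Combine the three oscillator reads into an overall trend label.

    Voting rule from the description: each bullish component adds +1;
    a total score of 2 or more is labelled "Uptrend", otherwise the
    status stays "Ranging". RSI thresholds (30/70) are from the text.
    """
    votes = 0
    if rsi < 30:          # oversold reads as a bullish vote
        votes += 1
    if macd_bullish:      # MACD line above signal with rising histogram
        votes += 1
    if stoch_bullish:     # K% favorable vs 80/20 levels and D% crossover
        votes += 1
    return "Uptrend" if votes >= 2 else "Ranging"
```

A mirrored set of bearish votes would drive the "Downtrend"/"Strong Sell" side of the table in the same way.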
Settings:
You can change the length of all indicators (RSI, MACD, Stoch).
You can change the table position and text size.
Color Customization: Dedicated section to change the dashboard colors.
Easy Crypto Signal FREE
🆓 FREE Bitcoin & Crypto Trading Indicator
Easy Crypto Signal FREE helps you make better trading decisions with real-time BUY/SELL signals based on multiple technical indicators.
✅ What you get in FREE version:
• Real-time BUY/SELL signals (green/red arrows)
• Trading SCORE (0-100%) - market strength indicator
• Works on BTC, ETH, and all major altcoins
• Optimized for 4h timeframe (works on all timeframes)
• Simple visual interface
• Basic alert system
📊 How it works:
The indicator combines RSI, MACD, EMA trends, and volume analysis to generate a composite SCORE (0-100%).
• SCORE > 65% = BUY signal 🟢
• SCORE < 35% = SELL signal 🔴
• SCORE 35-65% = WAIT (neutral zone) 🟡
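The score-to-signal zoning above is straightforward to express. This hedged sketch only mirrors the documented thresholds (65% / 35%); how the composite score itself is built from RSI, MACD, EMA trends, and volume is not shown here.

```python
def score_to_signal(score):
    """Map the composite 0-100% score to the documented zones:
    above 65 is BUY, below 35 is SELL, anything between is WAIT."""
    if score > 65:
        return "BUY"
    if score < 35:
        return "SELL"
    return "WAIT"
```

Note that the boundary values 35 and 65 fall in the neutral zone under this reading of the description.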
⚠️ FREE Version Limitations:
• No detailed RSI values
• No MACD trend details
• No trend strength indicators
• Fixed sensitivity (65%)
• Limited customization
💎 Want the FULL PRO version?
🚀 PRO includes:
• Full RSI + MACD + Trend analysis displayed
• Customizable sensitivity (40-80%)
• Advanced alert customization
• Professional clean interface
• Volume strength indicator
• NO watermarks
• Premium support
📊 Proven Backtest Results:
• 57.1% Win Rate
• 3.36 Profit Factor (Excellent)
• +9.55% return in 3 months
• Only -2.69% Max Drawdown (Low Risk)
🔗 Get PRO version:
📈 Best practices:
1. Use on 4h timeframe for best results
2. Combine with your own analysis
3. Always set Stop Loss (5-10%)
4. Test on demo account first
5. Don't trade based on signals alone
⚠️ Risk Disclaimer:
Cryptocurrency trading involves substantial risk. This indicator is for educational purposes only and does not guarantee profits. Past performance does not indicate future results. Always do your own research and never invest more than you can afford to lose.
📧 Questions or Feedback?
Comment below or message me directly!
🌟 If you find this helpful, please give it a like and share!
v1.0 - Initial FREE release
• Basic BUY/SELL signal system
• Score indicator 0-100%
• Optimized for 4h timeframe
• Works on all crypto pairs
TMT Sessions - Hitesh_Nimje
Overview
This indicator highlights four configurable trading sessions (default: New York / London / Tokyo / Sydney) and draws session ranges, session VWAPs, session mean/trendline, max/min lines and optional dashboard info. It was built for students of Thought Magic Trading (TMT) to quickly visualize intraday structure across major sessions.
Key features
4 separate sessions (A/B/C/D) — customizable names, times and colors.
Session Range boxes (high/low), optional outline and labels.
VWAP per session (volume-weighted average price).
Mean / Trendline for session price (optional).
Optional session Max/Min lines.
Small on-chart descriptive labels explaining what each plotted line means.
Simple dashboard showing session status (Active/Inactive), volume, trend strength and standard deviation (optional).
Timezone offset or use exchange timezone.
Default colors
Session A — Blue
Session B — Black
Session C — Red
Session D — Orange
Usage / Notes
Designed for intraday analysis — works best on intraday timeframes.
Toggle any session, overlay, or label via input settings to reduce chart clutter.
Labels and dashboard are optional; enable them only when you want the additional on-chart information.
The indicator does not provide buy/sell signals. Use it as a structural reference in conjunction with your trading plan.
Access & License
EXCLUSIVE ACCESS: This indicator is for TMT students only.
Distribution: Invite-only (author permission required) — the author will grant access by invitation.
Redistribution, modification, or public reposting without permission is prohibited.
Support / Contact
For access requests or issues, contact the author: Hitesh_Nimje (Thought Magic Trading).
(Provide invite requests directly to the author — do not attempt to share copies.)
Disclaimer
For educational purposes only. Trading involves risk. Past performance is not indicative of future results. The author is not responsible for trading losses.
Enhanced Ichimoku Cloud
DYNAMIC INDICATOR... I'm a beginner at this, so I like to enhance my indicators by adding visual elements that make them easier to read. Here is a visual representation of trend changes.
FRAN CRASH PLAY RULES
A script with a purely descriptive nature is one that:
• Only describes actions, settings, characters, and events.
• Contains no dialogue, commands, or instructions for execution.
• Does not specify plot decisions, logic, or interactive elements.
• Reads like a detailed narrative blueprint, focusing on what exists or happens rather than what anyone should do.
VB Finviz-style MTF Screener
📊 VB Multi-Timeframe Stock Screener (Daily + 4H + 1H)
A structured, high-signal stock screener that blends Daily fundamentals, 4H trend confirmation, and 1H entry timing to surface strong trading opportunities with institutional discipline.
🟦 1. Daily Screener — Core Stock Selection
All fundamental and structural filters run strictly on Daily data for maximum stability and signal quality.
Daily filters include:
📈 Average Volume & Relative Volume
💲 Minimum Price Threshold
📊 Beta vs SPY
🏢 Market Cap (Billions)
🔥 ATR Liquidity Filter
🧱 Float Requirements
📘 Price Above Daily SMA50
🚀 Minimum Gap-Up Condition
This layer acts like a Finviz-style engine, identifying stocks worth trading before momentum or timing is considered.
🟩 2. 4H Trend Confirmation — Momentum Check
Once a stock passes the Daily screen, the 4-hour timeframe validates trend strength:
🔼 Price above 4H MA
📈 MA pointing upward
This removes structurally good stocks that are not in a healthy trend.
🟧 3. 1H Entry Alignment — Timing Layer
The Hourly timeframe refines near-term timing:
🔼 Price above 1H MA
📉 Short-term upward movement detected
This step ensures the stock isn’t just good on paper—it’s moving now.
🧪 MTF Debug Table (Your Transparency Engine)
A live diagnostic table shows:
All Daily values
All 4H checks
All 1H checks
Exact PASS/FAIL per condition
Perfect for tuning thresholds or understanding why a ticker qualifies or fails.
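Conceptually, the three layers combine into a single pass/fail chain, which is what the debug table exposes per condition. The sketch below uses made-up field names and thresholds purely to illustrate the gating order (Daily structure first, then 4H trend, then 1H timing); the actual filter values are configurable in the indicator.

```python
def passes_screen(daily, h4, h1):
    """Evaluate the three layers in order; a ticker must clear all of them.

    Field names and thresholds are illustrative stand-ins for the
    screener's configurable inputs, not its real parameter names.
    """
    daily_ok = (daily["avg_volume"] >= 1_000_000      # liquidity filter
                and daily["price"] > daily["sma50"]    # above Daily SMA50
                and daily["market_cap_b"] >= 2)        # market cap floor
    trend_ok = h4["price"] > h4["ma"] and h4["ma_slope"] > 0
    timing_ok = h1["price"] > h1["ma"]
    return daily_ok and trend_ok and timing_ok
```

Because the layers are pure conjunctions, a single failed condition at any level (exactly what the debug table flags) is enough to drop the ticker.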
🎯 Who This Screener Is For
Swing traders
Momentum/trend traders
Systematic and rules-based traders
Traders who want clean, multi-timeframe alignment
By combining Daily fundamentals, 4H trend structure, and 1H momentum, this screener filters the market down to the stocks that are strong, aligned, and ready.