Help with SMA Crossover Demo script
Hi, I'm currently in the process of learning to write a script. Here's a very basic SMA 34/4 crossover script. Is somebody able to help me add the following functions to it?
1. Add an alert and indicator to close a short or long trade whenever any candle touches the SMA 34 line.
2. When an SMA 34/4 crossover has been executed (a Short Trade condition), add an alert/indicator (titled “Add”) every time a green bullish candle has closed.
3. When an SMA 34/4 crossunder has been executed (a Long Trade condition), add an alert/indicator (titled “Add”) every time a red bearish candle has closed.
4. To be used on 15m/30m/1hr/2hr/4hr/1D/1W timeframe charts.
SMA Crossover demo
Hi, I'm currently in the process of learning to write a script. Here's a very basic SMA 34/5 crossover script. Is somebody able to help me add the following functions to it?
1. Add an alert and indicator to close a short or long trade whenever any candle touches the SMA 34 line.
2. When an SMA 34/5 crossover has been executed (a Short Trade condition), add an alert/indicator (titled “Add”) every time a green bullish candle has closed.
3. When an SMA 34/5 crossunder has been executed (a Long Trade condition), add an alert/indicator (titled “Add”) every time a red bearish candle has closed.
4. To be used on 15m/30m/1hr/2hr/4hr/1D/1W timeframe charts.
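Below is a minimal Pine Script v5 sketch of what the requested additions could look like. It is only an illustration, not the poster's actual script: the 34/5 lengths, the "touch SMA 34" close signal, and the "Add" markers are interpreted from the post, and all names and defaults are placeholders.

```pine
//@version=5
indicator("SMA 34/5 crossover demo (sketch)", overlay=true)
fastLen = input.int(5, "Fast SMA length")
slowLen = input.int(34, "Slow SMA length")
fast = ta.sma(close, fastLen)
slow = ta.sma(close, slowLen)
plot(fast, "SMA 5", color=color.orange)
plot(slow, "SMA 34", color=color.blue)

// the post treats a crossover as the short-trade condition and a crossunder as the long-trade condition
shortSignal = ta.crossover(fast, slow)
longSignal  = ta.crossunder(fast, slow)
var int dir = 0                       // +1 after a long signal, -1 after a short signal
dir := longSignal ? 1 : shortSignal ? -1 : dir

// 1) close signal whenever any candle touches the SMA 34 line
touch34 = low <= slow and high >= slow
plotshape(dir != 0 and touch34, "Close trade", shape.xcross, location.abovebar, color.gray, size=size.tiny)

// 2) and 3) "Add" markers: green closes while short, red closes while long
addShort = dir == -1 and close > open
addLong  = dir == 1 and close < open
plotshape(addShort, "Add (short)", shape.labeldown, location.abovebar, color.red, text="Add", size=size.tiny)
plotshape(addLong, "Add (long)", shape.labelup, location.belowbar, color.green, text="Add", size=size.tiny)

// 4) nothing timeframe-specific is used, so this works on 15m through 1W charts
alertcondition(touch34, "Touch SMA 34", "Price touched the SMA 34 line")
alertcondition(addShort or addLong, "Add", "Add signal")
```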
Katana Gaps Bounty Hunter Pro (Show Gaps of All Types) by RRB
Katana Gaps Bounty Hunter Pro (KGB Hunter Pro, Gap Exterminator) by RagingRocketBull 2018
Version 1.0
This indicator shows/counts/filters gaps on a chart.
There are several versions: Simple, Pro, Advanced and Zones. This is the Pro version. The differences are listed below.
- Simple: shows/counts gaps, changes color based on gap dir (2 colors), filters out price gaps within session, large gaps, and high volume gaps
- Pro: +shows all types of gaps, multi color, pro filters (full/partial/overlapping time, price, large, candle, volume, doji, weekend gaps within delta ranges)
- Advanced: +session times mask, show/count gaps only for last N bars, +min/max/filled gaps stats, dark mode
- Zones: +shows gaps as dynamic horiz zones
KGB Hunter Pro Gap Exterminator focuses on showing you all possible types of gaps in multiple colors. Gap theory states that price tends to return and fill the gaps,
so you can use it to collect the bounty. You can apply any combination of complex filters to narrow down search results, e.g. find only all:
- type 3 gaps up with allowed wick-candle overlapping of up to 10% and
- gap size larger than 200 and
- with at least one of the candles larger than 100 and
- volume change at least 40 and
- spanning less than 2 bar periods and
- excluding weekend gaps
Features:
- highlights gaps using barcolor and plotchar chars (8 colors x 2 dirs)
- supports all 3 types of gap overlapping: full gap (no overlapping), wick-wick and wick-body overlapping up to a specified % of candle body
- finds all types of gaps with pro filters for price, time, large, volume, timerange, candle size, doji gaps
- individual show/hide flags for each gap/char based on gap type
- can show/hide gaps/chars based on gap dir
- changes color of gaps/chars based on gap dir/type, multi color gap type combos
- displays chars above/below bar based on gap dir
- can show/hide weekend gaps
- counts all filtered gaps
Colors:
Basically, there are 2 gap types (Price, Time) x 2 directions (Up, Down) x 2 modifiers (Large, Volume). Volume Gap is a separate class with its own modifiers, so more accurately:
- (Price, Time) x 2 directions (Up, Down) x Large modifier
- (Price Volume, Time Volume) x 2 directions (Up, Down) x Large modifier
using a total of 16+1 colors or 8+1 base colors + transparency modifier
Depending on settings, you can highlight gaps using any multi-color combo from just 1 to all 16 colors (+1 gray color for weekends).
basic gap = 1 base color with normal transparency
price,time = 2 base colors (including basic gap) with normal transparency (+1 color)
* up,down dir = +2 new base colors with normal transparency (including 2 base colors), with a total of 2*2 = 4 price/time base colors (+2 colors)
* large = same 4 base colors with vivid transparency modifier (+4 colors)
* volume = +2 new base colors with normal transparency, a separate class (+2 colors)
* volume * up,down dir = +another 2 new base colors with normal transparency (including 2 volume base colors), with a total of 2*2 = 4 volume base colors (+2 colors)
* volume * large = 4 volume base colors with vivid transparency modifier (+4 colors)
weekend_gap = gray (+1 color)
doji gap, candle gap, timerange gap = no special color, inherits color from parent gap type
for more details, please see the Gap Color Hierarchy comments in code
_________________________________________________________________________
You can find the following gap related terminology in literature: full, partial, extreme, breakaway, runaway/continuation, common, exhaustion gaps.
There are no exact rules to distinguish between them, so this can't be implemented.
When defining a gap, it all boils down to how you plot it: which points between adjacent candles you consider a gap. Different sources apply different methodologies,
but in practice only 3 types of gap overlapping can exist:
- full gap (no overlapping),
- partial (wick-wick overlapping) and
- extreme partial (wick-body overlapping up to a specified % of a candle body)
All these types are supported in this script. The only possible remaining option is candle-candle overlapping which is not a gap by definition.
Many other script specific subtypes are also supported. Please see description of each gap type below and comments in code.
General display modes
- gap has 3 possible overlapping modes: full gap (no overlapping), wick-wick overlapping, wick-candle overlapping up to a specified % of candle body size (for mode 3 only)
the remaining candle-candle overlapping is not a gap by definition
full gap mode will find the fewest gaps, wick-candle the most
- gap can be either price or time, up or down, and shown above or below the candles (gap chars)
- by definition, a price gap is a smaller subset of a time gap: a gap within the current session with a price gap and zero time lag between bars.
Therefore the timerange filter is useless for price gaps, but can still be applied.
On the other hand, all price gap filters can be applied to time gaps without any distinction.
- gap can have multiple modifier subtypes: (price|time) * (up|down) * (large? + volume? + doji? + timerange? + weekend?)
i.e. price + large + volume + doji or time + large + volume + timerange + doji + weekend
- the gap is always counted only once no matter how many subtype modifiers it has
- if the gap does not satisfy any of the applied flags/filters it is not shown/counted (no gap bars/chars are shown)
- gap color can depend on a combo of gap type/dir and modifier subtypes or can be shown in a single base color
- char color can only depend on gap dir (not type/modifiers) or can be shown in a single base color
- char position can also depend on gap dir (above/below) the gap candle. Alternatively you can pin chars to the top/bottom of the screen in UI Styles.
- change_by_type = true - uses gap type base colors (2 colors + optional modifiers, up to 8 colors if volume and/or large filters are enabled)
- change_by_dir = true - uses gap dir base colors (2 colors + optional modifiers, up to 8 colors if volume and/or large filters are enabled)
- both change_by_type and change_by_dir = true - uses both gap type and dir base colors (4 colors + optional modifiers, up to 16 colors if volume and/or large filters are enabled)
- both change_by_type and change_by_dir = false - uses a single base gap color (1 color)
- don't need that many colors? disable filters
- highlight bars has priority over individual gap flags; when it is false, all gaps are hidden regardless of their corresponding flag settings (does not affect dim weekend gaps)
- show chars has priority over individual gap char flags; when it is false, all chars are hidden regardless of their corresponding flag settings
- price gaps are only shown/counted when show_price_gaps flag is true. The large or volume filters can be used to narrow down results further.
- time gaps are only shown/counted when show_time_gaps flag is true. The large, volume, and timerange filters can be used to narrow down results further.
- doji gaps are only shown/counted when show_doji_gaps flag is true. The doji candle size and other filters can be used to narrow down results further.
- show weekend gaps = true and dim weekend gaps = false - shows/counts weekend gaps
- show weekend gaps = true and dim weekend gaps = true - dims weekend gaps, doesn't show/count weekend gaps
- show/dim weekend gaps do just that - show the gap if it happens on a weekend, not all weekends
- large gaps are only shown/counted when the large filter is enabled (!= 0); a positive value such as 5 means >= 5, a negative value such as -5 means <= 5
- volume gaps are only shown/counted when the volume filter is enabled (!= 0); a positive value such as 5 means >= 5, a negative value such as -5 means <= 5
- timerange gaps are only shown/counted when the timerange filter is enabled (!= 0); a positive value such as 5 means >= 5, a negative value such as -5 means <= 5
- candle size gaps are only shown/counted when the candle size filter is enabled (!= 0); a positive value such as 5 means >= 5, a negative value such as -5 means <= 5
- the candle size filter is the only filter with 2 arguments; use_and_for_delta enables the AND condition for the args (OR is the default)
Good Luck! Feel free to explore and learn from the code
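As a rough illustration of the core idea, here is a minimal Pine Script v5 sketch that flags basic gap-up/gap-down bars. The full-gap condition is unambiguous; the wick-wick variant shown is one reasonable interpretation and is not taken from the indicator's actual code, so treat thresholds and colors as placeholders.

```pine
//@version=5
indicator("Basic gap detection (sketch)", overlay=true)
// full gap (no overlapping): the whole current candle trades beyond the previous candle's range
gapUp   = low > high[1]
gapDown = high < low[1]
// wick-wick overlapping interpretation: wicks may overlap, but the current body stays beyond the previous range
bodyTop    = math.max(open, close)
bodyBottom = math.min(open, close)
gapUpWick   = not gapUp and bodyBottom > high[1]
gapDownWick = not gapDown and bodyTop < low[1]
barcolor(gapUp ? color.lime : gapDown ? color.red : gapUpWick ? color.teal : gapDownWick ? color.maroon : na)
plotchar(gapUp or gapUpWick, "Gap up", "▲", location.belowbar, color.lime, size=size.tiny)
plotchar(gapDown or gapDownWick, "Gap down", "▼", location.abovebar, color.red, size=size.tiny)
```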
Renko Candlesticks
Renko charts are awesome. They reduce noise by only painting a brick on the chart when price moves by a specified amount up or down. When the price reverses, it must move twice the specified amount before a brick is painted. Time is not a factor, just price movement. Sometimes, however, you want the pros of a renko chart on a regular candlestick chart. This indicator attempts to do just that.
A band is placed around price action showing the upper and lower bounds of what would be the current renko brick. The band only goes up/down when the price action itself moves up/down by the amount you specify. There are several ways of specifying the amount:
Fixed Price Amount: As the name says, you enter the brick size amount, i.e. the amount the price has to move before being in a new brick.
% of Price: This method will calculate the amount the price has to move as a percentage of the price itself. This way as price goes up/down, your brick size will adjust accordingly. Recommended values would be around 1% or less.
% of ATR: This option will make the brick size a percentage of the Average True Range. You can specify the ATR time frame to be different from your current time frame as well as the ATR length. For instance you could be on a 10 minute chart but specify the ATR to be daily with a length of 3 and a percentage amount of 15. This would make your brick size 15% of the Average True Range for the last 3 days. Recommended values are 10 to 20%.
Use this indicator on any time frame, even the 1 minute as the renko bands span the price action the same way on any time frame easily letting you know whether or not the price has moved appreciably, regardless of how much time has passed.
You can also set alerts easily, simply set the alert to crossing and choose “Renko Candlesticks” instead of “Value”. You will then see the options for the renko upper and lower bounds.
Tested on Bitcoin with the following values:
Fixed Price Amount: 30 ($30)
% of Price: 0.45 (if Bitcoin is $7000 then the brick size would be $31.50)
% of ATR: 15%, ATR Time Frame: 1D, ATR Length: 3 (3 days)
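A minimal sketch of how the three brick-size modes described above could be computed (the inputs mirror the tested values; the exact calculation in the published script may differ):

```pine
//@version=5
indicator("Renko brick size (sketch)")
method   = input.string("% of ATR", "Brick size method", options=["Fixed Price Amount", "% of Price", "% of ATR"])
fixedAmt = input.float(30.0, "Fixed Price Amount")
pctPrice = input.float(0.45, "% of Price")
pctATR   = input.float(15.0, "% of ATR")
atrTF    = input.timeframe("D", "ATR Time Frame")
atrLen   = input.int(3, "ATR Length")
// ATR taken from another timeframe, e.g. a daily ATR while viewing a 10-minute chart
atrHTF = request.security(syminfo.tickerid, atrTF, ta.atr(atrLen), barmerge.gaps_off, barmerge.lookahead_off)
brick = method == "Fixed Price Amount" ? fixedAmt : method == "% of Price" ? close * pctPrice / 100 : atrHTF * pctATR / 100
plot(brick, "Brick size", color=color.blue)
```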
Ease of Movement (EOM) Backtest
This indicator gauges the magnitude of price and volume movement. The indicator returns both positive and negative values, where a positive value means the market has moved up from yesterday's value and a negative value means the market has moved down. A large positive or large negative value indicates a large move in price and/or lighter volume. A small positive or small negative value indicates a small move in price and/or heavier volume.
The output is a positive or negative numeric value: a positive value means the market has moved up from yesterday's value, whereas a negative value means the market has moved down.
You can change long to short in the Input Settings
WARNING:
- For educational purposes only.
- This script changes bar colors.
Ease of Movement (EOM) Strategy
This indicator gauges the magnitude of price and volume movement. The indicator returns both positive and negative values, where a positive value means the market has moved up from yesterday's value and a negative value means the market has moved down. A large positive or large negative value indicates a large move in price and/or lighter volume. A small positive or small negative value indicates a small move in price and/or heavier volume.
The output is a positive or negative numeric value: a positive value means the market has moved up from yesterday's value, whereas a negative value means the market has moved down.
WARNING:
- This script changes bar colors.
Salty GRaB Wave with Highlights for Squeeze CCI-Arrows SlowStoch
This indicator shows GRaB candles and allows several moving averages to be displayed at the same time.
It uses background coloring to identify momentum shifts. Wide bands of color can be used to identify trends while short bands of color can be used to identify reversals.
It has arrows above or below the candles to show CCI values above 100 or below -100 with the arrow pointing in the direction of the momentum.
It has red background coloring to show slow stochastic Overbought ranges and dark red signals indicating a cross of the fast and slow lines.
It has green background coloring to show slow stochastic Oversold ranges and dark green signals indicating a cross of the fast and slow lines.
It has yellow background to show squeezes with additional Squeeze information shown at the bottom of the chart in the form of letters and momentum arrows.
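A simplified Pine Script v5 sketch of the CCI arrows and slow-stochastic background highlights described above (lengths, thresholds, and colors are assumptions, not the published settings):

```pine
//@version=5
indicator("CCI arrows + slow stochastic highlights (sketch)", overlay=true)
cciLen   = input.int(20, "CCI length")
stochLen = input.int(14, "Stochastic length")
cci = ta.cci(close, cciLen)
k   = ta.sma(ta.stoch(close, high, low, stochLen), 3)   // slow %K
d   = ta.sma(k, 3)                                      // slow %D
obCross = ta.crossunder(k, d)
osCross = ta.crossover(k, d)
// arrows for CCI beyond +/-100, pointing with momentum
plotshape(cci > 100, "CCI > 100", shape.triangleup, location.belowbar, color.lime, size=size.tiny)
plotshape(cci < -100, "CCI < -100", shape.triangledown, location.abovebar, color.red, size=size.tiny)
// background for overbought/oversold ranges, darker marks on fast/slow crosses inside them
bgcolor(k > 80 ? color.new(color.red, 85) : k < 20 ? color.new(color.green, 85) : na)
plotshape(k > 80 and obCross, "Overbought cross", shape.circle, location.abovebar, color.maroon, size=size.tiny)
plotshape(k < 20 and osCross, "Oversold cross", shape.circle, location.belowbar, color.teal, size=size.tiny)
```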
Ichimoku-Hausky Trading system
This is an indicator with some parts of the Ichimoku and an EMA. It's my first script, so I have used other people's scripts (Chris Moody and DavidR) as a reference because I really have no idea myself how to script with Pine Script.
Hope that is okay!
I use the 20M timeframe, but it should work with any timeframe! I have not tested this system much, so I would really appreciate feedback and tips for better entries, settings, etc.
Tenkan-sen: green line
Kijun-sen: blue line
EMA: Purple
Rules:
Buy:
IF price crosses or bounce above Kijun-sen
THEN see if market has closed above EMA
IF Market has closed above EMA
THEN see if EMA is above Kijun-sen
IF EMA is above Kijun-sen
THEN buy and set trailing stop 5 pips below EMA
Sell:
IF price crosses or bounce below Kijun-sen
THEN see if market has closed below EMA
IF Market has closed below EMA
THEN see if EMA is below Kijun-sen
IF EMA is below Kijun-sen
THEN sell and set trailing stop 5 pips above EMA
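For reference, the buy/sell rules above translate roughly into the following Pine Script v5 conditions. The Kijun-sen and EMA lengths are assumptions (the post does not state them), and the trailing-stop handling is omitted:

```pine
//@version=5
indicator("Ichimoku-Hausky entry check (sketch)", overlay=true)
kijunLen = input.int(26, "Kijun-sen length")    // assumed default
emaLen   = input.int(20, "EMA length")          // assumed default
kijun = (ta.highest(high, kijunLen) + ta.lowest(low, kijunLen)) / 2
emaLine = ta.ema(close, emaLen)
plot(kijun, "Kijun-sen", color=color.blue)
plot(emaLine, "EMA", color=color.purple)
// buy: price crosses above Kijun-sen, close above EMA, EMA above Kijun-sen (sell is the mirror image)
buySignal  = ta.crossover(close, kijun) and close > emaLine and emaLine > kijun
sellSignal = ta.crossunder(close, kijun) and close < emaLine and emaLine < kijun
plotshape(buySignal, "Buy", shape.triangleup, location.belowbar, color.green, size=size.tiny)
plotshape(sellSignal, "Sell", shape.triangledown, location.abovebar, color.red, size=size.tiny)
```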
Ease of Movement (EOM)
This indicator gauges the magnitude of price and volume movement. The indicator returns both positive and negative values, where a positive value means the market has moved up from yesterday's value and a negative value means the market has moved down. A large positive or large negative value indicates a large move in price and/or lighter volume. A small positive or small negative value indicates a small move in price and/or heavier volume.
The output is a positive or negative numeric value: a positive value means the market has moved up from yesterday's value, whereas a negative value means the market has moved down.
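For context, the classic Ease of Movement calculation looks like the sketch below; the smoothing length and volume divisor here are common defaults, not necessarily the ones used in these scripts:

```pine
//@version=5
indicator("Ease of Movement (sketch)")
length  = input.int(14, "Smoothing length")
divisor = input.int(10000, "Volume divisor")   // scaling constant, an assumption
// distance the bar midpoint moved versus yesterday, divided by how much volume it took per unit of range
midMove  = (high + low) / 2 - (high[1] + low[1]) / 2
boxRatio = (volume / divisor) / (high - low)
eom = ta.sma(midMove / boxRatio, length)
plot(eom, "EOM", color=color.blue)
hline(0)
```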
Daily/Weekly Wick (Shadow) Range
📈 Detailed Guide to the Daily/Weekly Wick (Shadow) Range Indicator
This indicator is a powerful visualization tool designed to map the key price levels established during the previous trading period (either the previous day or the previous week). Instead of just showing a single line for the high and low, it highlights the entire range of the upper and lower wicks (shadows), representing the "battleground" where buyers and sellers were most active.
How It Works
The Wick (Shadow) Range indicator fetches the Open, High, Low, and Close data from the last completed daily or weekly candle and projects those levels onto your current chart. This creates two distinct colored zones.
Upper Wick (Green Zone): This area spans from the Previous High down to the top of the Previous Candle's Body. It visually represents the territory where sellers successfully pushed the price down from its peak. This entire zone can be considered a resistance area.
Lower Wick (Red Zone): This area spans from the bottom of the Previous Candle's Body down to the Previous Low. It shows where buyers stepped in to defend a price level and push it back up. This entire zone can be considered a support area.
How to Use It in Your Trading
This indicator isn't meant to give direct buy or sell signals on its own. Instead, it provides crucial context about market structure. Here are several ways to incorporate it into your strategy:
1. Identifying Key Support & Resistance
This is the indicator's primary function. The most significant levels are:
Key Resistance: The top edge of the green zone (the previous period's high).
Key Support: The bottom edge of the red zone (the previous period's low).
Look for the current price to react when it approaches these boundaries. These are high-probability areas for price to pause or reverse.
2. Watching for Price Rejection (Reversal Trading)
The colored zones are perfect for spotting rejection signals.
Bearish Rejection 📉: If the current price enters the green zone but fails to stay there, closing back below it (often forming a new wick), it's a strong sign that sellers are still in control at that level. This can be an excellent entry signal for a short position.
Bullish Rejection 📈: If the current price dips into the red zone and is quickly bought back up, it shows that buyers are actively defending that area. This can be a great entry signal for a long position.
3. Confirming Breakouts (Trend Trading)
The zones also help validate breakouts.
Bullish Breakout: If the price pushes decisively through the entire green zone and closes above the previous high, it signals that the previous resistance has been broken and the trend may continue upward.
Bearish Breakdown: If the price falls decisively through the entire red zone and closes below the previous low, it confirms that support has failed and the price may continue downward.
4. Setting Context with Timeframes
Weekly Setting: Use the "Weekly" option to identify major, significant support and resistance levels that can influence the market for the entire week. These are powerful levels for swing trading.
Daily Setting: Use the "Daily" option for intraday trading. The previous day's high and low are critical pivot points that many day traders watch.
⚙️ Indicator Settings
The indicator has one simple setting, which you can access by clicking the gear icon ⚙️ next to its name on the chart.
Select Wick Timeframe: This dropdown menu allows you to switch the indicator's calculation between the Daily and Weekly timeframe instantly.
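A compact Pine Script v5 sketch of the core mechanic described above: fetch the previous completed daily or weekly candle and shade its upper and lower wick zones. This is an illustration of the concept, not the published source:

```pine
//@version=5
indicator("Previous period wick zones (sketch)", overlay=true)
tf = input.string("D", "Select Wick Timeframe", options=["D", "W"])
// previous completed higher-timeframe candle (the [1] index plus lookahead avoids repainting)
[pO, pH, pL, pC] = request.security(syminfo.tickerid, tf, [open[1], high[1], low[1], close[1]], lookahead=barmerge.lookahead_on)
bodyTop    = math.max(pO, pC)
bodyBottom = math.min(pO, pC)
hiPlot = plot(pH, "Prev high", color=color.green)
btPlot = plot(bodyTop, "Prev body top", color=color.green)
bbPlot = plot(bodyBottom, "Prev body bottom", color=color.red)
loPlot = plot(pL, "Prev low", color=color.red)
fill(hiPlot, btPlot, color=color.new(color.green, 85))   // upper wick zone (resistance)
fill(bbPlot, loPlot, color=color.new(color.red, 85))     // lower wick zone (support)
```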
Quantile Regression Bands [BackQuant]
Quantile Regression Bands
Tail-aware trend channeling built from quantiles of real errors, not just standard deviations.
What it does
This indicator fits a simple linear trend over a rolling lookback and then measures how price has actually deviated from that trend during the window. It then places two pairs of bands at user-chosen quantiles of those deviations (inner and outer). Because bands are based on empirical quantiles rather than a symmetric standard deviation, they adapt to skewed and fat-tailed behaviour and often hug price better in trending or asymmetric markets.
Why “quantile” bands instead of Bollinger-style bands?
Bollinger Bands assume a (roughly) symmetric spread around the mean; quantiles don’t—upper and lower bands can sit at different distances if the error distribution is skewed.
Quantiles are robust to outliers; a single shock won’t inflate the bands for many bars.
You can choose tails precisely (e.g., 1%/99% or 5%/95%) to match your risk appetite.
How it works (intuitive)
Center line — a rolling linear regression approximates the local trend.
Residuals — for each bar in the lookback, the indicator looks at the gap between actual price and where the line “expected” price to be.
Quantiles — those gaps are sorted; you select which percentiles become your inner/outer offsets.
Bands — the chosen quantile offsets are added to the current end of the regression line to draw parallel support/resistance rails.
Smoothing — a light EMA can be applied to reduce jitter in the line and bands.
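A minimal sketch of the rolling fit and empirical-quantile bands described above. It fits the regression manually, collects residuals over the window, and reads off the chosen percentiles; the quantile levels and lookback are illustrative and the smoothing step is omitted:

```pine
//@version=5
indicator("Quantile regression bands (sketch)", overlay=true, max_bars_back=500)
len    = input.int(100, "Lookback Length")
tauIn  = input.float(0.75, "Inner quantile")
tauOut = input.float(0.99, "Outer quantile")

// least-squares fit y = a + b*x over the window, x = 0 (oldest) .. len-1 (newest)
float sumX  = 0.0
float sumY  = 0.0
float sumXY = 0.0
float sumX2 = 0.0
for i = 0 to len - 1
    x = float(len - 1 - i)
    y = close[i]
    sumX  += x
    sumY  += y
    sumXY += x * y
    sumX2 += x * x
n = float(len)
b = (n * sumXY - sumX * sumY) / (n * sumX2 - sumX * sumX)
a = (sumY - b * sumX) / n
center = a + b * (n - 1)                 // fitted value at the current bar

// empirical quantiles of the residuals inside the window
res = array.new_float()
for i = 0 to len - 1
    fitted = a + b * float(len - 1 - i)
    array.push(res, close[i] - fitted)
qInHi  = array.percentile_linear_interpolation(res, tauIn * 100)
qInLo  = array.percentile_linear_interpolation(res, (1 - tauIn) * 100)
qOutHi = array.percentile_linear_interpolation(res, tauOut * 100)
qOutLo = array.percentile_linear_interpolation(res, (1 - tauOut) * 100)

plot(center, "Center", color=color.gray)
plot(center + qInHi, "Inner upper", color=color.teal)
plot(center + qInLo, "Inner lower", color=color.teal)
plot(center + qOutHi, "Outer upper", color=color.red)
plot(center + qOutLo, "Outer lower", color=color.red)
```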
What you see
Center (linear regression) line (optional).
Inner quantile bands (e.g., 25th/75th) with optional translucent fill.
Outer quantile bands (e.g., 1st/99th) with a multi-step gradient to visualise “tail zones.”
Optional bar coloring: bars trend-colored by whether price is rising above or falling below the center line.
Alerts when price crosses the outer bands (upper or lower).
How to read it
Trend & drift — the slope of the center line is your local trend. Persistent closes on the same side of the center line indicate directional drift.
Pullbacks — tags of the inner band often mark routine pullbacks within trend. Reaction back to the center line can be used for continuation entries/partials.
Tails & squeezes — outer-band touches highlight statistically rare excursions for the chosen window. Frequent outer-band activity can signal regime change or volatility expansion.
Asymmetry — if the upper band sits much further from the center than the lower (or vice versa), recent behaviour has been skewed. Trade management can be adjusted accordingly (e.g., wider take-profit upslope than downslope).
A simple trend interpretation can be derived from the bar colouring
Good use-cases
Volatility-aware mean reversion — fade moves into outer bands back toward the center when trend is flat.
Trend participation — buy pullbacks to the inner band above a rising center; flip logic for shorts below a falling center.
Risk framing — set dynamic stops/targets at quantile rails so position sizing respects recent tail behaviour rather than fixed ticks.
Inputs (quick guide)
Source — price input used for the fit (default: close).
Lookback Length — bars in the regression window and residual sample. Longer = smoother, slower bands; shorter = tighter, more reactive.
Inner/Outer Quantiles (τ) — choose your “typical” vs “tail” levels (e.g., 0.25/0.75 inner, 0.01/0.99 outer).
Show toggles — independently toggle center line, inner bands, outer bands, and their fills.
Colors & transparency — customize band and fill appearance; gradient shading highlights the tail zone.
Band Smoothing Length — small EMA on lines to reduce stair-step artefacts without meaningfully changing levels.
Bar Coloring — optional trend tint from the center line’s momentum.
Practical settings
Swing trading — Length 75–150; inner τ = 0.25/0.75, outer τ = 0.05/0.95.
Intraday — Length 50–100 for liquid futures/FX; consider 0.20/0.80 inner and 0.02/0.98 outer in high-vol assets.
Crypto — Because of fat tails, try slightly wider outers (0.01/0.99) and keep smoothing at 2–4 to tame weekend jumps.
Signal ideas
Continuation — in an uptrend, look for pullback into the lower inner band with a close back above the center as a timing cue.
Exhaustion probe — in ranges, first touch of an outer band followed by a rejection candle back inside the inner band often precedes mean-reversion swings.
Regime shift — repeated closes beyond an outer band or a sharp re-tilt in the center line can mark a new trend phase; adjust tactics (stop-following along the opposite inner band).
Alerts included
“Price Crosses Upper Outer Band” — potential overextension or breakout risk.
“Price Crosses Lower Outer Band” — potential capitulation or breakdown risk.
Notes
The fit and quantiles are computed on a fixed rolling window and do not repaint; bands update as the window moves forward.
Quantiles are based on the recent distribution; if conditions change abruptly, expect band widths and skew to adapt over the next few bars.
Parameter choices directly shape behaviour: longer windows favour stability, tighter inner quantiles increase touch frequency, and extreme outer quantiles highlight only the rarest moves.
Final thought
Quantile bands answer a simple question: “How unusual is this move given the current trend and the way price has been missing it lately?” By scoring that question with real, distribution-aware limits rather than one-size-fits-all volatility you get cleaner pullback zones in trends, more honest “extreme” tags in ranges, and a framework for risk that matches the market’s recent personality.
Cumulative Returns by Session [BackQuant]
Cumulative Returns by Session
What this is
This tool breaks the trading day into three user-defined sessions and tracks how much each session contributes to return, volatility, and volume. It then aggregates results over a rolling window so you can see which session has been pulling its weight, how streaky each session has been, and how sessions relate to one another through a compact correlation heatmap.
We’ve also given the user the option of a simplified table: just switch off all the settings you are not interested in.
How it works
1) Session segmentation
You define APAC, EU, and US sessions with explicit hours and time zones. The script detects when each session starts and ends on every intraday bar and records its open, intraday high and low, close, and summed volume.
2) Per-session math
At each session end the script computes:
Return — either Percent: (Close − Open) ÷ Open × 100, or Points: Close − Open, based on your selection.
Volatility — either Range: (High − Low) ÷ Open × 100, or ATR scaled by price: ATR ÷ Open × 100.
Volume — total volume transacted during that session.
3) Storage and lookback
Each day’s three session stats are stored as a row. You choose how many recent sessions to keep in memory. The script then:
Builds cumulative returns for APAC, EU, US across the lookback.
Computes averages, win rates, and a Sharpe-like ratio (avg return ÷ avg volatility) per session.
Tracks streaks of positive or negative sessions to show momentum.
Tracks drawdowns on cumulative returns to show worst runs from peak.
Computes rolling means over a short window for short-term drift.
4) Correlation heatmap
Using the stored arrays of session returns, the script calculates Pearson correlations between APAC–EU, APAC–US, and EU–US, and colors the matrix by strength and sign so you can spot coupling or decoupling at a glance.
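To make the per-session math concrete, here is a single-session Pine Script v5 sketch that captures a session's open/high/low and evaluates the percent return and range volatility when the session closes. It handles one session only and uses assumed hours; the full indicator tracks three sessions plus the rolling statistics:

```pine
//@version=5
indicator("Session return and volatility (sketch)")
sess = input.session("0930-1600", "Session hours")
tz   = input.string("America/New_York", "Time zone")
inSess    = not na(time(timeframe.period, sess, tz))
sessStart = inSess and not inSess[1]
sessEnd   = not inSess and inSess[1]

var float sessOpen = na
var float sessHigh = na
var float sessLow  = na
if sessStart
    sessOpen := open
    sessHigh := high
    sessLow  := low
else if inSess
    sessHigh := math.max(sessHigh, high)
    sessLow  := math.min(sessLow, low)

// evaluated on the first bar after the session ends, using the last in-session close
float retPct = na
float volPct = na
if sessEnd
    retPct := (close[1] - sessOpen) / sessOpen * 100
    volPct := (sessHigh - sessLow) / sessOpen * 100
plot(retPct, "Session return %", style=plot.style_histogram)
plot(volPct, "Session range %", color=color.orange)
```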
What it plots
Three lines: cumulative return for APAC, EU, US over the chosen lookback.
Zero reference line for orientation.
A statistics table with cumulative %, average %, positive session rate, and optional columns for volatility, average volume, max drawdown, current streak, return-to-vol ratio, and rolling average.
A small correlation heatmap table showing APAC, EU, US cross-session correlations.
How to use it
Pick the asset — leave Custom Instrument empty to use the chart symbol, or point to another symbol for cross-asset studies.
Set your sessions and time zones — defaults approximate APAC, EU, and US hours, but you can align them to exchange times or your workflow.
Choose calculation modes — Percent vs Points for return, Range vs ATR for volatility. Points are convenient for futures and fixed-tick assets, Percent is comparable across symbols.
Decide the lookback — more sessions smooths lines and stats; fewer sessions makes the tool more reactive.
Toggle analytics — add volatility, volume, drawdown, streaks, Sharpe-like ratio, rolling averages, and the correlation table as needed.
Why session attribution helps
Different sessions are driven by different flows. Asia often sets the overnight tone, Europe adds liquidity and direction changes, and the US session can dominate range expansion. Separating contributions by session helps you:
Identify which session has been the main driver of net trend.
Measure whether volatility or volume is concentrated in a specific window.
See if one session’s gains are consistently given back in another.
Adapt tactics: fade during a mean-reverting session, press during a trending session.
Reading the tables
Cumulative % — sum of session returns over the lookback. The sign and slope tell you who is carrying the move.
Avg Return % and Positive Sessions % — direction and hit rate. A low average but high hit rate implies many small moves; the reverse implies occasional big swings.
Avg Volatility % — typical intrabar range for that session. Compare with Avg Return to judge efficiency.
Return/Vol Ratio — return per unit of volatility. Higher is better for stability.
Max Drawdown % — worst cumulative give-back within the lookback. A quick way to spot riskiness by session.
Current Streak — consecutive up or down sessions. Useful for mean-reversion or regime awareness.
Rolling Avg % — short-window drift indicator to catch recent turnarounds.
Correlation matrix — green clusters indicate sessions tending to move together; red indicates offsetting behavior.
Settings overview
Basic
Number of Sessions — how many recent days to include.
Custom Instrument — analyze another ticker while staying on your current chart.
Session Configuration and Times
Enable or hide APAC, EU, US rows.
Set hours per session and the specific time zone for each.
Calculation Methods
Return Calculation — Percent or Points.
Volatility Calculation — Range or ATR; ATR Length when applicable.
Advanced Analytics
Correlation, Drawdown, Momentum, Sharpe-like ratio, Rolling Statistics, Rolling Period.
Display Options and Colors
Show Statistics Table and its position.
Toggle columns for Volatility and Volume.
Pick individual colors for each session line and row accents.
Common applications
Session bias mapping — find which window tends to trend in your market and plan exposure accordingly.
Strategy scheduling — allocate attention or risk to the session with the best return-to-vol ratio.
News and macro awareness — see if correlation rises around central bank cycles or major data releases.
Cross-asset monitoring — set the Custom Instrument to a driver (index future, DXY, yields) to see if your symbol reacts in a particular session.
Notes
This indicator works on intraday charts, since sessions are defined within a day. If you change session clocks or time zones, give the script a few bars to accumulate fresh rows. Percent vs Points and Range vs ATR choices affect comparability across assets, so be consistent when comparing symbols.
Session context is one of the simplest ways to explain a messy tape. By separating the day into three windows and scoring each one on return, volatility, and consistency, this tool shows not just where price ended up but when and how it got there. Use the cumulative lines to spot the steady driver, read the table to judge quality and risk, and glance at the heatmap to learn whether the sessions are amplifying or canceling one another. Adjust the hours to your market and let the data tell you which session deserves your focus.
Staggered Exponential Pullbacks
Indicator Description: Staggered Exponential Pullbacks (Final)
Core Concept
This indicator is designed to dynamically track and visualize price pullbacks from a recent high. It serves as an intelligent alert system and a tool for visualizing potential support levels that follow a predefined, non-linear logic.
Instead of a fixed percentage interval, the indicator calculates the levels based on a fixed, exponentially increasing sequence of percentages. The distance between the levels increases as the price falls further. This models a strategy where larger price movements are tolerated as a pullback deepens before the next signal level is reached. The basis for this calculation is always the highest close of the last x candles.
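As an illustration of the cascading calculation, the sketch below derives the first three levels from the highest close of the lookback window, each measured from the previous level. Only the 3.0%, 3.6%, and 4.3% steps are given in the description; anything beyond that, and the persistence/reset logic, is intentionally left out:

```pine
//@version=5
indicator("Staggered pullback levels (sketch)", overlay=true)
lookback = input.int(50, "Number of Candles (Lookback)")   // default is an assumption
var steps = array.from(3.0, 3.6, 4.3)                      // percentage steps from the description
ref = ta.highest(close, lookback)                          // highest close of the last x candles
// each level is measured from the previous one, so the distances compound as the pullback deepens
level1 = ref * (1 - array.get(steps, 0) / 100)
level2 = level1 * (1 - array.get(steps, 1) / 100)
level3 = level2 * (1 - array.get(steps, 2) / 100)
plot(ref, "Highest close", color=color.orange, style=plot.style_stepline)
plot(level1, "Level 1 (-3.0%)", color=color.purple)
plot(level2, "Level 2 (-3.6% from level 1)", color=color.purple)
plot(level3, "Level 3 (-4.3% from level 2)", color=color.purple)
```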
Key Features
This indicator goes far beyond a simple calculation, offering a range of intelligent features for professional use:
Cascading, Fixed Levels: The levels are based on a fixed sequence of percentage distances (3.0%, 3.6%, 4.3%, etc.), where each new level is calculated from the previous level.
Persistent Support Levels ("Floors"): Once an alert level is breached, it transforms into a fixed support line ("floor"). This line will never move down, even if the market high subsequently drops.
Automatic Upward Adjustment: Established floors are automatically pulled upwards when the market shows new strength and makes higher highs. A once-reached -3% floor will therefore rise with the market.
Intelligent, Self-Cleaning Reset Logic: The indicator recognizes when a pullback sequence has ended and a new one has begun. "Ghost lines" from old, irrelevant price movements are automatically removed from the chart to ensure maximum clarity.
Cascade-Proof Alerts: Even during extremely fast sell-offs that break through multiple levels in a single candle, the indicator correctly captures every single level breach.
Customizable Visualization: All key parameters, such as the lookback period and the colors of the lines, can be easily adjusted in the settings.
Visual Elements on the Chart
The Orange Line (Highest Close): This is the reference line. It always shows the highest closing price within the defined lookback period and has a step-line shape.
The 'Floor' Lines (Default: Yellow): These are solid lines that indicate which percentage levels have already been breached in the current sequence. They function as established support levels.
The 'Next Due' Line (Default: Purple): This is a step-line that displays the next expected alert level. It moves dynamically with the calculation. As soon as the price crosses this line, an alert is triggered, and it transforms into a yellow "Floor" line.
Settings (Inputs)
Number of Candles (Lookback): Defines how many past candles are used to determine the highest closing price.
Displayed Alert Levels (Max 10): Determines the maximum number of levels the indicator will calculate and display.
Color of Floors: Allows you to freely choose the color for the solid, established support lines.
Color of Next Due Line: Allows you to freely choose the color for the next, untriggered alert line.
Setting Up Alerts (Important!)
Since the indicator uses dynamic alert messages, the alert must be set up as follows:
Add the indicator to the chart.
Click the clock icon ("Alert") in the top toolbar.
In the "Condition" field, select the name of this indicator: Staggered Exponential Pullbacks.
In the second dropdown menu, you must select the option "Any alert() function call".
Message: The message box can be left empty. The indicator automatically generates a detailed message (e.g., "Price Alert: Level 2 (3.6%) reached!").
Click "Create".
You only need one single alert to cover all 10 levels.
Important Disclaimer: Not Financial Advice
This indicator is purely a technical analysis tool for visualizing price movements. The displayed lines and triggered alerts do not constitute buy or sell recommendations and are not a form of financial or investment advice. They serve for informational and analytical purposes only.
Trading decisions based on the information from this indicator are made solely at your own risk and responsibility. The author and developer of this script assume no liability for any trading losses. Always conduct your own comprehensive analysis and, if necessary, consult a qualified financial advisor before making any trading decisions.
Trend dealing range
Hi all!
This indicator will help you find the current dealing range according to the trend. If the trend is bullish the indicator will look for a range between the latest low pivot to the latest high pivot. Vice versa in a bearish trend. The code uses my new library 'FibonacciRetracement' () that has the same code as my other indicator 'Fibonacci retracement' ().
It plots 5 lines from the low to the high and labels them 0 %, 25 %, 50 %, 75 % and 100 %. A trendline can be drawn between the two pivots (dashed and gray by default). Firstly, you can define the pivot lengths used; this setting is in the 'Market structure' section but it also applies to the dealing range (it defaults to 5 (left) and 2 (right)). You can show prices if you want to (shown in parentheses, off by default). You can change the default label position (from left) and the font size (12 by default; higher up it's 7 for market structure text). Lastly, you can change the alert frequency (defaults to once per bar close) and the price that has to enter a zone for an alert to be sent: 'Close' means that the closing price (or current price if you change the alert frequency to all or once per bar) has to be inside the zone, and 'Wick' means that the entire candle needs to be inside the zone.
It's very useful for traders to find the current dealing range and this indicator will help you to do so.
So, this indicator will give you the dealing range and basic market structure through break of structures and change of characters.
If you have any input or suggestions on future features or bugs, don't hesitate to let me know!
Best of trading luck!
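A stripped-down sketch of the underlying idea: track the latest pivot high and pivot low and lay five levels (0 %, 25 %, 50 %, 75 %, 100 %) across that range. It ignores trend direction, market structure, and alerts, and the pivot lengths simply mirror the stated defaults:

```pine
//@version=5
indicator("Dealing range levels (sketch)", overlay=true)
leftLen  = input.int(5, "Pivot left length")
rightLen = input.int(2, "Pivot right length")
ph = ta.pivothigh(high, leftLen, rightLen)
pl = ta.pivotlow(low, leftLen, rightLen)
var float lastHigh = na
var float lastLow  = na
if not na(ph)
    lastHigh := ph
if not na(pl)
    lastLow := pl
rng = lastHigh - lastLow
plot(lastLow, "0 %", color=color.gray)
plot(lastLow + rng * 0.25, "25 %", color=color.gray)
plot(lastLow + rng * 0.50, "50 %", color=color.gray)
plot(lastLow + rng * 0.75, "75 %", color=color.gray)
plot(lastHigh, "100 %", color=color.gray)
```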
Mutanabby_AI | ATR+ | Trend-Following Strategy
This document presents the Mutanabby_AI | ATR+ Pine Script strategy, a systematic approach designed for trend identification and risk-managed position entry in financial markets. The strategy is engineered for long-only positions and integrates volatility-adjusted components to enhance signal robustness and trade management.
Strategic Design and Methodological Basis
The Mutanabby_AI | ATR+ strategy is constructed upon a foundation of established technical analysis principles, with a focus on objective signal generation and realistic trade execution.
Heikin Ashi for Trend Filtering: The core price data is processed via Heikin Ashi (HA) methodology to mitigate transient market noise and accentuate underlying trend direction. The script offers three distinct HA calculation modes, allowing for comparative analysis and validation:
Manual Calculation: Provides a transparent and deterministic computation of HA values.
ticker.heikinashi(): Utilizes TradingView's built-in function, employing confirmed historical bars to prevent repainting artifacts.
Regular Candles: Allows for direct comparison with standard OHLC price action.
This multi-methodological approach to trend smoothing is critical for robust signal generation.
Adaptive ATR Trailing Stop: A key component is the Average True Range (ATR)-based trailing stop. ATR serves as a dynamic measure of market volatility. The strategy incorporates user-defined parameters (Key Value and ATR Period) to calibrate the sensitivity of this trailing stop, enabling adaptation to varying market volatility regimes. This mechanism is designed to provide a dynamic exit point, preserving capital and locking in gains as a trend progresses.
EMA Crossover for Signal Generation: Entry and exit signals are derived from the interaction between the Heikin Ashi derived price source and an Exponential Moving Average (EMA). A crossover event between these two components is utilized to objectively identify shifts in momentum, signaling potential long entry or exit points.
Rigorous Stop Loss Implementation: A critical feature for risk mitigation, the strategy includes an optional stop loss. This stop loss can be configured as a percentage or fixed point deviation from the entry price. Importantly, stop loss execution is based on real market prices, not the synthetic Heikin Ashi values. This design choice ensures that risk management is grounded in actual market liquidity and price levels, providing a more accurate representation of potential drawdowns during backtesting and live operation.
Backtesting Protocol: The strategy is configured for realistic backtesting, employing fill_orders_on_standard_ohlc=true to simulate order execution at standard OHLC prices. A configurable Date Filter is included to define specific historical periods for performance evaluation.
Data Visualization and Metrics: The script provides on-chart visual overlays for buy/sell signals, the ATR trailing stop, and the stop loss level. An integrated information table displays real-time strategy parameters, current position status, trend direction, and key price levels, facilitating immediate quantitative assessment.
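For orientation, the sketch below shows a generic ATR trailing stop with a crossover trigger, patterned on the common Key Value / ATR Period construction the description refers to. It is not the strategy's actual code: the Heikin Ashi source handling, stop loss, and backtesting settings are omitted, and the source input is a stand-in:

```pine
//@version=5
indicator("ATR trailing stop + crossover (sketch)", overlay=true)
keyValue = input.float(1.0, "Key Value")
atrLen   = input.int(10, "ATR Period")
src      = input.source(close, "Source")   // the full strategy can feed Heikin Ashi values here
atrOffset = keyValue * ta.atr(atrLen)
var float trail = na
prevTrail = nz(trail[1])
if src > prevTrail and src[1] > prevTrail
    trail := math.max(prevTrail, src - atrOffset)    // ratchet the stop up while price stays above it
else if src < prevTrail and src[1] < prevTrail
    trail := math.min(prevTrail, src + atrOffset)    // ratchet the stop down while price stays below it
else
    trail := src > prevTrail ? src - atrOffset : src + atrOffset
buySignal  = ta.crossover(src, trail)
sellSignal = ta.crossunder(src, trail)
plot(trail, "ATR trailing stop", color=color.orange)
plotshape(buySignal, "Buy", shape.labelup, location.belowbar, color.green, text="Buy")
plotshape(sellSignal, "Sell", shape.labeldown, location.abovebar, color.red, text="Sell")
```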
Applicability
The Mutanabby_AI | ATR+ strategy is particularly suited for:
Cryptocurrency Markets: The inherent volatility of assets such as #Bitcoin and #Ethereum makes the ATR-based trailing stop a relevant tool for dynamic risk management.
Systematic Trend Following: Individuals employing systematic methodologies for trend capture will find the objective signal generation and rule-based execution aligned with their approach.
Pine Script Developers and Quants: The transparent code structure and emphasis on realistic backtesting provide a valuable framework for further analysis, modification, and integration into broader quantitative models.
Automated Trading Systems: The clear, deterministic entry and exit conditions facilitate integration into automated trading environments.
Implementation and Evaluation
To evaluate the Mutanabby_AI | ATR+ strategy, apply the script to your chosen chart on TradingView. Adjust the input parameters (Key Value, ATR Period, Heikin Ashi Method, Stop Loss Settings) to observe performance across various asset classes and timeframes. Comprehensive backtesting is recommended to assess the strategy's historical performance characteristics, including profitability, drawdown, and risk-adjusted returns.
I'd love to hear your thoughts, feedback, and any optimizations you discover! Drop a comment below, give it a like if you find it useful, and share your results.
Market Sessions By Zcointv/Scottfdx
This code has been written by Zcointv/Scottfdx traders.
This is a Market Volatility Box Breakout Strategy designed for intraday trading on 5-minute charts.
How it Works:
Volatility Box: The strategy defines a "volatility box" by capturing the price range (High and Low) around the New York market open.
The box begins one hour before the market open and ends 30 minutes after the market open.
The High and Low of this box are locked for the rest of the day.
Breakout Entry: A trade is opened only after this session period has ended.
Long: A 5-minute candle must close above the High of the box.
Short: A 5-minute candle must close below the Low of the box.
Risk Management:
1% Risk: Each trade risks a maximum of 1% of the total account equity. The position size is calculated dynamically based on this risk.
Stop Loss: The initial stop-loss is placed just outside the opposite side of the box.
1:1 Take Profit: The target is set at a 1:1 risk-to-reward ratio.
Partial Exit & Breakeven: When the take-profit target is hit, 50% of the position is closed. The stop-loss for the remaining 50% is then immediately moved to the entry price (breakeven).
Key Features:
The strategy is limited to one trade per day.
The indicator also has options to display configurable boxes for the Tokyo and London sessions.
The High and Low levels of the volatility box are plotted on the chart for visual reference.
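A bare-bones Pine Script v5 sketch of the box-and-breakout mechanic described above, for a 5-minute chart. The session window (08:30-10:00 New York time, i.e. one hour before the open to 30 minutes after) is inferred from the description; position sizing, the 1% risk model, partial exits, and the one-trade-per-day limit are not included:

```pine
//@version=5
indicator("NY open volatility box (sketch)", overlay=true)
boxSess = input.session("0830-1000", "Box session (New York time)")
inBox   = not na(time(timeframe.period, boxSess, "America/New_York"))
var float boxHigh = na
var float boxLow  = na
if inBox and not inBox[1]          // box starts: reset the levels
    boxHigh := high
    boxLow  := low
else if inBox                      // box builds: extend the levels
    boxHigh := math.max(boxHigh, high)
    boxLow  := math.min(boxLow, low)
// after the box window, a close beyond a boundary is the breakout trigger
boxDone  = not inBox and not na(boxHigh)
longBrk  = boxDone and close > boxHigh and close[1] <= boxHigh
shortBrk = boxDone and close < boxLow and close[1] >= boxLow
plot(boxHigh, "Box high", color=color.green)
plot(boxLow, "Box low", color=color.red)
plotshape(longBrk, "Long breakout", shape.triangleup, location.belowbar, color.green, size=size.tiny)
plotshape(shortBrk, "Short breakout", shape.triangledown, location.abovebar, color.red, size=size.tiny)
```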
TimeSeriesBenchmarkMeasures
Library "TimeSeriesBenchmarkMeasures"
Time Series Benchmark Metrics. \
Provides a comprehensive set of functions for benchmarking time series data, allowing you to evaluate the accuracy, stability, and risk characteristics of various models or strategies. The functions cover a wide range of statistical measures, including accuracy metrics (MAE, MSE, RMSE, NRMSE, MAPE, SMAPE), autocorrelation analysis (ACF, ADF), and risk measures (Theils Inequality, Sharpness, Resolution, Coverage, and Pinball).
___
Reference:
- github.com .
- medium.com .
- www.salesforce.com .
- towardsdatascience.com .
- github.com .
mae(actual, forecasts)
In statistics, mean absolute error (MAE) is a measure of errors between paired observations expressing the same phenomenon. Examples of Y versus X include comparisons of predicted versus observed, subsequent time versus initial time, and one technique of measurement versus an alternative technique of measurement.
Parameters:
actual (array) : List of actual values.
forecasts (array) : List of forecasts values.
Returns: - Mean Absolute Error (MAE).
___
Reference:
- en.wikipedia.org .
- The Orange Book of Machine Learning - Carl McBride Ellis .
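To show how such a metric is typically wired up in Pine, here is a small self-contained example that re-implements a function matching the documented mae() signature and evaluates it against a naive one-bar-lag forecast. In practice you would import the published library instead of redefining the function; the toy data and the 20-bar window are purely illustrative:

```pine
//@version=5
indicator("MAE usage example (sketch)", max_bars_back=100)
// standalone stand-in matching the documented mae() signature
mae(array<float> actual, array<float> forecasts) =>
    int n = math.min(array.size(actual), array.size(forecasts))
    float total = 0.0
    for i = 0 to n - 1
        total += math.abs(array.get(actual, i) - array.get(forecasts, i))
    total / n

var actual    = array.new_float()
var forecasts = array.new_float()
float result = na
if barstate.islast
    array.clear(actual)
    array.clear(forecasts)
    // toy comparison: treat the prior bar's close as the "forecast" for each of the last 20 bars
    for i = 0 to 19
        array.push(actual, close[i])
        array.push(forecasts, close[i + 1])
    result := mae(actual, forecasts)
plot(result, "MAE")
```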
mse(actual, forecasts)
The Mean Squared Error (MSE) is a measure of the quality of an estimator. As it is derived from the square of Euclidean distance, it is always a positive value that decreases as the error approaches zero.
Parameters:
actual (array) : List of actual values.
forecasts (array) : List of forecasts values.
Returns: - Mean Squared Error (MSE).
___
Reference:
- en.wikipedia.org .
rmse(targets, forecasts, order, offset)
Calculates the Root Mean Squared Error (RMSE) between target observations and forecasts. RMSE is a standard measure of the differences between values predicted by a model and the values actually observed.
Parameters:
targets (array) : List of target observations.
forecasts (array) : List of forecasts.
order (int) : Model order parameter that determines the starting position in the targets array, `default=0`.
offset (int) : Forecast offset related to target, `default=0`.
Returns: - RMSE value.
nmrse(targets, forecasts, order, offset)
Normalised Root Mean Squared Error.
Parameters:
targets (array) : List of target observations.
forecasts (array) : List of forecasts.
order (int) : Model order parameter that determines the starting position in the targets array, `default=0`.
offset (int) : Forecast offset related to target, `default=0`.
Returns: - NRMSE value.
rmse_interval(targets, forecasts)
Root Mean Squared Error for a set of interval windows. Computes RMSE by converting interval forecasts (with min/max bounds) into point forecasts using the mean of the interval bounds, then compares against actual target values.
Parameters:
targets (array) : List of target observations.
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - RMSE value for the combined interval list.
mape(targets, forecasts)
Mean Average Percentual Error.
Parameters:
targets (array) : List of target observations.
forecasts (array) : List of forecasts.
Returns: - MAPE value.
smape(targets, forecasts, mode)
Symmetric Mean Average Percentual Error. Calculates the Mean Absolute Percentage Error (MAPE) between actual targets and forecasts. MAPE is a common metric for evaluating forecast accuracy, expressed as a percentage, lower values indicate a better forecast accuracy.
Parameters:
targets (array) : List of target observations.
forecasts (array) : List of forecasts.
mode (int) : Type of method: default=0:`sum(abs(Fi-Ti)) / sum(Fi+Ti)` , 1:`mean(abs(Fi-Ti) / ((Fi + Ti) / 2))` , 2:`mean(abs(Fi-Ti) / (abs(Fi) + abs(Ti))) * 100`
Returns: - SMAPE value.
mape_interval(targets, forecasts)
Mean Average Percentual Error for a set of interval windows.
Parameters:
targets (array) : List of target observations.
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - MAPE value for the combined interval list.
acf(data, k)
Autocorrelation Function (ACF) for a time series at a specified lag.
Parameters:
data (array) : Sample data of the observations.
k (int) : The lag period for which to calculate the autocorrelation. Must be a non-negative integer.
Returns: - The autocorrelation value at the specified lag, ranging from -1 to 1.
___
The autocorrelation function measures the linear dependence between observations in a time series
at different time lags. It quantifies how well the series correlates with itself at different
time intervals, which is useful for identifying patterns, seasonality, and the appropriate
lag structure for time series models.
ACF values close to 1 indicate strong positive correlation, values close to -1 indicate
strong negative correlation, and values near 0 indicate no linear correlation.
___
Reference:
- statisticsbyjim.com
acf_multiple(data, k)
Autocorrelation function (ACF) for a time series at a set of specified lags.
Parameters:
data (array) : Sample data of the observations.
k (array) : List of lag periods for which to calculate the autocorrelation. Must be a non-negative integer.
Returns: - List of ACF values for provided lags.
___
The autocorrelation function measures the linear dependence between observations in a time series
at different time lags. It quantifies how well the series correlates with itself at different
time intervals, which is useful for identifying patterns, seasonality, and the appropriate
lag structure for time series models.
ACF values close to 1 indicate strong positive correlation, values close to -1 indicate
strong negative correlation, and values near 0 indicate no linear correlation.
___
Reference:
- statisticsbyjim.com
adfuller(data, n_lag, conf)
Augmented Dickey-Fuller test for stationarity.
Parameters:
data (array) : Data series.
n_lag (int) : Maximum lag.
conf (string) : Confidence Probability level used to test for critical value, (`90%`, `95%`, `99%`).
Returns: - `adf` The test statistic.
- `crit` Critical value for the test statistic at the 10 % levels.
- `nobs` Number of observations used for the ADF regression and calculation of the critical values.
___
The Augmented Dickey-Fuller test is used to determine whether a time series is stationary
or contains a unit root (non-stationary). The null hypothesis is that the series has a unit root
(is non-stationary), while the alternative hypothesis is that the series is stationary.
A stationary time series has statistical properties that do not change over time, making it
suitable for many time series forecasting models. If the test statistic is less than the
critical value, we reject the null hypothesis and conclude the series is stationary.
___
Reference:
- www.jstor.org
- en.wikipedia.org
theils_inequality(targets, forecasts)
Calculates Theil's Inequality Coefficient, a measure of forecast accuracy that quantifies the relative difference between actual and predicted values.
Parameters:
targets (array) : List of target observations.
forecasts (array) : Matrix with list of forecasts, ordered column wise.
Returns: - Theil's Inequality Coefficient value, value closer to 0 is better.
___
Theil's Inequality Coefficient is calculated as: `sqrt(Sum((y_i - f_i)^2)) / (sqrt(Sum(y_i^2)) + sqrt(Sum(f_i^2)))`
where `y_i` represents actual values and `f_i` represents forecast values.
This metric ranges from 0 to infinity, with 0 indicating perfect forecast accuracy.
___
Reference:
- en.wikipedia.org
sharpness(forecasts)
The average width of the forecast intervals across all observations, representing the sharpness or precision of the predictive intervals.
Parameters:
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - Sharpness The sharpness level, which is the average width of all prediction intervals across the forecast horizon.
___
Sharpness is an important metric for evaluating forecast quality. It measures how narrow or wide the
prediction intervals are. Higher sharpness (narrower intervals) indicates greater precision in the
forecast intervals, while lower sharpness (wider intervals) suggests less precision.
The sharpness metric is calculated as the mean of the interval widths across all observations, where
each interval width is the difference between the upper and lower bounds of the prediction interval.
Note: This function assumes that the forecasts matrix has at least 2 columns, with the first column
representing the lower bounds and the second column representing the upper bounds of prediction intervals.
___
Reference:
- Hyndman, R. J., & Athanasopoulos, G. (2018). Forecasting: principles and practice. OTexts. otexts.com
resolution(forecasts)
Calculates the resolution of forecast intervals, measuring the average absolute difference between individual forecast interval widths and the overall sharpness measure.
Parameters:
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - The average absolute difference between individual forecast interval widths and the overall sharpness measure, representing the resolution of the forecasts.
___
Resolution is a key metric for evaluating forecast quality that measures the consistency of prediction
interval widths. It quantifies how much the individual forecast intervals vary from the average interval
width (sharpness). High resolution indicates that the forecast intervals are relatively consistent
across observations, while low resolution suggests significant variation in interval widths.
The resolution is calculated as the mean absolute deviation of individual interval widths from the
overall sharpness value. This provides insight into the uniformity of the forecast uncertainty
estimates across the forecast horizon.
Note: This function requires the forecasts matrix to have at least 2 columns (min, max) representing
the lower and upper bounds of prediction intervals.
___
Reference:
- (sites.stat.washington.edu)
- (www.jstor.org)
coverage(targets, forecasts)
Calculates the coverage probability, which is the percentage of target values that fall within the corresponding forecasted prediction intervals.
Parameters:
targets (array) : List of target values.
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - Percent of target values that fall within their corresponding forecast intervals, expressed as a decimal value between 0 and 1 (or 0% and 100%).
___
Coverage probability is a crucial metric for evaluating the reliability of prediction intervals.
It measures how well the forecast intervals capture the actual observed values. An ideal forecast
should have a coverage probability close to the nominal confidence level (e.g., 90%, 95%, or 99%).
For example, if a 95% prediction interval is used, we expect approximately 95% of the actual
target values to fall within those intervals. If the coverage is significantly lower than the
nominal level, the intervals may be too narrow; if it's significantly higher, the intervals may
be too wide.
Note: This function requires the targets array and forecasts matrix to have the same number of
observations, and the forecasts matrix must have at least 2 columns (min, max) representing
the lower and upper bounds of prediction intervals.
___
Reference:
- (www.jstor.org)
pinball(tau, target, forecast)
Pinball loss function, measures the asymmetric loss for quantile forecasts.
Parameters:
tau (float) : The quantile level (between 0 and 1), where 0.5 represents the median.
target (float) : The actual observed value to compare against.
forecast (float) : The forecasted value.
Returns: - The Pinball loss value, which quantifies the distance between the forecast and target relative to the specified quantile level.
___
The Pinball loss function is specifically designed for evaluating quantile forecasts. It is
asymmetric, meaning it penalizes underestimates and overestimates differently depending on the
quantile level being evaluated.
For a given quantile τ, the loss function is defined as:
- If target >= forecast: (target - forecast) * τ
- If target < forecast: (forecast - target) * (1 - τ)
This loss function is commonly used in quantile regression and probabilistic forecasting
to evaluate how well forecasts capture specific quantiles of the target distribution.
___
Reference:
- (www.otexts.com)
pinball_mean(tau, targets, forecasts)
Calculates the mean pinball loss for quantile regression.
Parameters:
tau (float) : The quantile level (between 0 and 1), where 0.5 represents the median.
targets (array) : The actual observed values to compare against.
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - The mean pinball loss value across all observations.
___
The pinball_mean() function computes the average Pinball loss across multiple observations,
making it suitable for evaluating overall forecast performance in quantile regression tasks.
This function leverages the asymmetric Pinball loss function to evaluate how well forecasts
capture specific quantiles of the target distribution. The choice of which column from the
forecasts matrix to use depends on the quantile level:
- For τ ≤ 0.5: Uses the first column (min) of forecasts
- For τ > 0.5: Uses the second column (max) of forecasts
This loss function is commonly used in quantile regression and probabilistic forecasting
to evaluate how well forecasts capture specific quantiles of the target distribution.
___
Reference:
- (www.otexts.com)
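A sketch of the averaged version, using the column-selection rule described above (names and the demo values are illustrative, not the library's source):
//@version=5
indicator("Mean pinball loss sketch")
// Sketch only: average pinball loss over all observations, using the min
// column for tau <= 0.5 and the max column otherwise.
pinball_mean_sketch(tau, targets, forecasts) =>
    int col = tau <= 0.5 ? 0 : 1
    int n = array.size(targets)
    float total = 0.0
    for i = 0 to n - 1
        t = array.get(targets, i)
        f = matrix.get(forecasts, i, col)
        total += (t >= f ? (t - f) * tau : (f - t) * (1 - tau))
    total / n
// Both targets sit 1 point below the max bound, so each loss is 0.1 at tau = 0.9.
targets = array.from(10.0, 14.0)
ivals   = matrix.new<float>(2, 2)
matrix.set(ivals, 0, 0, 9.0)
matrix.set(ivals, 0, 1, 11.0)
matrix.set(ivals, 1, 0, 12.0)
matrix.set(ivals, 1, 1, 15.0)
plot(pinball_mean_sketch(0.9, targets, ivals), "mean pinball loss")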
[Pandora][Swarm] Rapid Exponential Moving AverageENVISIONING POSSIBILITY
What is the theoretical pinnacle of possibility? The current state of algorithmic affairs falls far short of my aspirations for achievable feasibility. I'm lifting the lid off of Pandora's box once again, very publicly this time, as a brute force challenge to conventional 'wisdom'. The unfolding series of time mandates a transcendental systemic alteration...
THE MOVING AVERAGE ZOO:
The realm of digital signal processing for trading is filled with familiar antiquated filtering tools. Two families of filtration, being 'infinite impulse response' (EMA, RMA, etc.) and 'finite impulse response' (WMA, SMA, etc.), are prevalently employed without question. These filter types are the mules and donkeys of data analysis, broadly accepted for use in finance.
At first glance, they appear sufficient for most tasks, offering a basic straightforward way to reduce noise and highlight trends. Yet, beneath their simplistic facade lies a constellation of limitations and impediments, each having its own finicky quirks. Upon closer inspection, identifiable drawbacks render them far from ideal for many real-world applications in today's volatile markets.
KNOWN FUNDAMENTAL FLAWS:
Despite commonplace moving average (MA) popularity, these conventional filters suffer from an assortment of fundamental flaws. Most of them don't genuinely address core challenges of how to preserve the true dynamics of a signal while suppressing noise and retaining cutoff frequency compliance. Their simple cookie cutter structures make them ill-suited in actuality for dynamic market environments. In reality, they often trade one problem for another dilemma, forsaking analytics to choose between distortion and delay.
A deeper-seated issue remains within frequency compliance: how adequately a filter respects (or disrespects) the underlying signal’s spectral properties according to its assigned periodic parameter. Traditional MAs habitually distort phase relationships, causing delayed reactions with surplus lag or exaggerations with excessive undershoot/overshoot. For applications requiring timely resilience, such as algorithmic trading, these shortcomings are often functionally unacceptable. What’s needed are vigorous filters that can more accurately retain signal behaviors while minimizing lag without sacrificing smoothness and uniformity. Until then, the public MA zoo remains a collection of corny compromises rather than a favorable toolbelt of solutions.
P.S.: In PSv7+, in my opinion, many of these geriatric MAs deserve no future of easy access for the naive, who simply don't know that these filters are most likely creating bigger problems than they solve.
R.E.M.A.
What is this? I prefer to think of it as the "radical EMA", definitely along my lines of a retire everything morte algorithm. This isn't your run of the mill average from the petting zoo. I would categorize it as a paradigm shifting rampant economic masochistic annihilator, sufficiently good enough to begin ruthlessly executing moving averages left and right. Um, yeah... that kind of moving average destructor as you may soon recognize with a few 'Filters+' settings adjustments, realizing ordinary EMA has been doing us an injustice all this time.
Does it possess the capability to relentlessly exterminate most averaging filters in existence? Well, it's about time we find out, by uncaging it on the loose into the greater economic wilderness. Only then can we truly find out if it is indeed a radical exponential market accelerant whose time has come. If it is, then it may eventually become a reality erasing monolithic anomaly destined for greatness, ultimately changing the entire landscape of trading in perpetuity.
UNLEASHING NEXT-GEN:
This lone next generation exoweapon algorithm is intended to initiate the transformative beginning stages of mass filtration deprecation. However, it won't be the only one, just the first arrival of its alien kind from me. Welcome to notion #1 of my future filtration frontier, on this episode of the algorithmic twilight zone. Where reality takes a twisting turn one dimension beyond practical logic, after persistent models of mindset disintegrate into insignificance, followed by illusory perception confronted into cognitive dissonance.
An evolutionary path to genuine advancement resides outside the prison of preconceptions, manifesting only after divergence from persistent binding restrictions of dogmatic doctrines. Such a genesis in transformative thinking will catalyze unbounded cognitive potential, plowing the way for the cultivation of total redesigns of thought. Futuristic innovative breakthroughs demand the surrender of legacy and outmoded understandings.
Now that the world's largest assembly of investors has been ensembled, there are additional tasks left to perform. I'm compelled to deploy this mathematical-weapon of mass financial creation into its rightful destined hands, to "WE THE PEOPLE" of TV.
SCRIPT INTENTION:
Deprecate anything and everything as any non-commercial member sees desirably fit. This includes your existing code formulations already in working functional modes of operation AND/OR future projects in the works. Swapping is nearly as simple as copying and pasting with meager modifications, after you have identified comparable likeness in this indicator's settings with a visual assessment. Results may become eye-opening, but only if you dare to look and test.
Where you may suspect a ta.filter() is lacking sufficient luster or may be flat out majorly deficient, employing rema, drema, trema, or qrema configurations may be a more suitable replacement. That's up to you to discern. My code satire already identifies likely bottom of the barrel suspects that either belong in the extinction record or have already been marked for deprecation. They are ordered more towards the bottom by rank where they belong. SuperSmoother is a masterpiece here to stay, being my original go-to reference filter. Everything you see here is already deprecated, including REMA...
REMA CHARACTERISTICS
- VERY low lag
- No overshoot
- Frequency compliant
- Proper initialization at bar_index==0
- Period parameter accepts positive floating point numerics (AND integers!)
- Infinite impulse response (IIR) filter
- Compact code footprint
- Minimized computational overhead
Dip Hunter [BackQuant]Dip Hunter
What this tool does in plain language
Dip Hunter is a pullback detector designed to find high quality buy-the-dip opportunities inside healthy trends and to avoid random knife catches. It watches for a quick drop from a recent high, checks that the drop happened with meaningful participation and volatility, verifies short-term weakness inside a larger uptrend, then scores the setup and paints the chart so you can act with confidence. It also draws clean entry lines, provides a meter that shows dip strength at a glance, and ships with alerts that match common execution workflows.
How Dip Hunter thinks
It defines a recent swing reference, measures how far price has dipped off that high, and only looks at candidates that meet your minimum percentage drop.
It confirms the dip with real activity by requiring a volume spike and a volatility spike.
It checks structure with two EMAs. Price should be weak in the short term while the larger context remains constructive.
It optionally requires a higher-timeframe trend to be up so you focus on pullbacks in trending markets.
It bundles those checks into a score and shows you the score on the candles and on a gradient meter.
When everything lines up it paints a green triangle below the bar, shades the background, and (if you wish) draws a horizontal entry line at your chosen level.
Inputs and what they mean
Dip Hunter Settings
• Vol Lookback and Vol Spike : The script computes an average volume over the lookback window and flags a spike when current volume is a multiple of that average. A multiplier of 2.0 means today’s volume must be at least double the average. This helps filter noise and focuses on dips that other traders actually traded.
• Fast EMA and Slow EMA : Short-term and medium-term structure references. A dip is more credible if price closes below the fast EMA while the fast EMA is still below the slow EMA during the pullback. That is classic corrective behavior inside a larger trend.
• Price Smooth : Optional smoothing length for price-derived series. Use this if you trade very noisy assets or low timeframes.
• Volatility Len and Vol Spike (volatility) : The script checks both standard deviation and true range against their own averages. If either expands beyond your multiplier the market confirms the move with range.
• Dip % and Lookback Bars : The engine finds the highest high over the lookback window, then computes the percentage drawdown from that high to the current close. Only dips larger than your threshold qualify.
Trend Filter
• Enable Trend Filter : When on, Dip Hunter will only trigger if the market is in an uptrend.
• Trend EMA Period : The longer EMA that defines the session’s backbone trend.
• Minimum Trend Strength : A small positive slope requirement. In practice this means the trend EMA should be rising, and price should be above it. You can raise the value to be more selective.
Entries
• Show Entry Lines : Draws a horizontal guide from the signal bar for a fixed number of bars. Great for limit orders, scaling, or re-tests.
• Line Length (bars) : How far the entry guide extends.
• Min Gap (bars) : Suppresses new entry lines if another dip fired recently. Prevents clutter during choppy sequences.
• Entry Price : Choose the line level. “Low” anchors at the signal candle’s low. “Close” anchors at the signal close. “Dip % Level” anchors at the theoretical level defined by recent_high × (1 − dip%). This lets you work resting orders at a consistent discount.
Heat / Meter
• Color Bars by Score : Colors each candle using a red→white→green gradient. Red is overheated, green is prime dip territory, white is neutral.
• Show Meter Table : Adds a compact gradient strip with a pointer that tracks the current score.
• Meter Cells and Meter Position : Resolution and placement of the meter.
UI Settings
• Show Dip Signals : Plots green triangles under qualifying bars and tints the background very lightly.
• Show EMAs : Plots fast, slow, and the trend EMA (if the trend filter is enabled).
• Bullish, Bearish, Neutral colors : Theme controls for shapes, fills, and bar painting.
Core calculations explained simply
Recent high and dip percent
The script finds the highest high over Lookback Bars , calls it “recent high,” then calculates:
dip% = (recent_high − close) ÷ recent_high × 100.
If dip% is larger than Dip % , condition one passes.
Volume confirmation
It computes a simple moving average of volume over Vol Lookback . If current volume ÷ average volume > Vol Spike , we have a participation spike. It also checks 5-bar ROC of volume. If ROC > 50 the spike is forceful. This gets an extra score point.
Volatility confirmation
Two independent checks:
• Standard deviation of closes vs its own average.
• True range vs ATR.
If either expands beyond Vol Spike (volatility) the move has range. This prevents false triggers from quiet drifts.
Short-term structure
Price should close below the Fast EMA and the fast EMA should be below the Slow EMA at the moment of the dip. That is the anatomy of a pullback rather than a full breakdown.
Macro trend context (optional)
When Enable Trend Filter is on, the Trend EMA must be rising and price must be above it. The logic prefers “micro weakness inside macro strength” which is the highest probability pattern for buying dips.
Signal formation
A valid dip requires:
• dip% > threshold
• volume spike true
• volatility spike true
• close below fast EMA
• fast EMA below slow EMA
If the trend filter is enabled, a rising trend EMA with price above it is also required. When all true, the triangle prints, the background tints, and optional entry lines are drawn.
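For readers who want to see how these pieces could fit together in code, here is a compact, illustrative Pine sketch of the signal logic just described. The input names and defaults echo the settings section above, but this is a sketch under those assumptions, not the published source:
//@version=5
indicator("Dip condition sketch", overlay = true)
dipThresh = input.float(2.0, "Dip %")
lookback  = input.int(8, "Lookback Bars")
volLook   = input.int(20, "Vol Lookback")
volMult   = input.float(2.0, "Vol Spike")
fastLen   = input.int(20, "Fast EMA")
slowLen   = input.int(50, "Slow EMA")

// Dip off the recent swing high, in percent.
recentHigh = ta.highest(high, lookback)
dipPct     = (recentHigh - close) / recentHigh * 100

// Participation and range confirmation.
volSpike  = volume / ta.sma(volume, volLook) > volMult
stdevNow  = ta.stdev(close, 14)
volaSpike = stdevNow > ta.sma(stdevNow, 14) * volMult or ta.tr > ta.atr(14) * volMult

// Corrective EMA structure.
fastEma = ta.ema(close, fastLen)
slowEma = ta.ema(close, slowLen)

dipSignal = dipPct > dipThresh and volSpike and volaSpike and close < fastEma and fastEma < slowEma
plotshape(dipSignal, "Dip", shape.triangleup, location.belowbar, color.green)
The higher-timeframe trend check, scoring, and entry-line drawing sit on top of this core condition in the full indicator.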
Scoring and visuals
Binary checks into a continuous score
Each component contributes to a score between 0 and 1. The script then rescales to a centered range (−50 to +50).
• Low or negative scores imply “overheated” conditions and are shaded toward red.
• High positive scores imply “ripe for a dip buy” conditions and are shaded toward green.
• The gradient meter repeats the same logic, with a pointer so you can read the state quickly.
Bar coloring
If you enable “Color Bars by Score,” each candle inherits the gradient. This makes sequences obvious. Red clusters warn you not to buy. White means neutral. Increasing green suggests the pullback is maturing.
EMAs and the trend EMA
• Fast EMA turns down relative to the slow EMA inside the pullback.
• Trend EMA stays rising and above price once the dip exhausts, which is your cue to focus on long setups rather than bottom fishing in downtrends.
Entry lines
When a fresh signal fires and no other signal happened within Min Gap (bars) , the indicator draws a horizontal level for Line Length bars. Use these lines for limit entries at the low, at the close, or at the defined dip-percent level. This keeps your plan consistent across instruments.
Alerts and what they mean
• Market Overheated : Score is deeply negative. Do not chase. Wait for green.
• Close To A Dip : Score has reached a healthy level but the full signal did not trigger yet. Prepare orders.
• Dip Confirmed : First bar of a fresh validated dip. This is the most direct entry alert.
• Dip Active : The dip condition remains valid. You can scale in on re-tests.
• Dip Fading : Score crosses below 0.5 from above. Momentum of the setup is fading. Tighten stops or take partials.
• Trend Blocked Signal : All dip conditions passed but the trend filter is offside. Either reduce risk or skip, depending on your plan.
How to trade with Dip Hunter
Classic pullback in uptrend
Turn on the trend filter.
Watch for a Dip Confirmed alert with green triangle.
Use the entry line at “Dip % Level” to stage a limit order. This keeps your entries consistent across assets and timeframes.
Initial stop under the signal bar’s low or under the next lower EMA band.
First target at prior swing high, second target at a multiple of risk.
If you use partials, trail the remainder under the fast EMA once price reclaims it.
Aggressive intraday scalps
Lower Dip % and Lookback Bars so you catch shallow flags.
Keep Vol Spike meaningful so you only trade when participation appears.
Take quick partials when price reclaims the fast EMA, then exit on Dip Fading if momentum stalls.
Counter-trend probes
Disable the trend filter if you intentionally hunt reflex bounces in downtrends.
Require strong volume and volatility confirmation.
Use smaller size and faster targets. The meter should move quickly from red toward white and then green. If it does not, step aside.
Risk management templates
Stops
• Conservative: below the entry line minus a small buffer or below the signal bar’s low.
• Structural: below the slow EMA if you aim for swing continuation.
• Time stop: if price does not reclaim the fast EMA within N bars, exit.
Position sizing
Use the distance between the entry line and your structural stop to size consistently. The script’s entry lines make this distance obvious.
Scaling
• Scale at the entry line first touch.
• Add only if the meter stays green and price reclaims the fast EMA.
• Stop adding on a Dip Fading alert.
Tuning guide by market and timeframe
Equities daily
• Dip %: 1.5 to 3.0
• Lookback Bars: 5 to 10
• Vol Spike: 1.5 to 2.5
• Volatility Len: 14 to 20
• Trend EMA: 100 or 200
• Keep trend filter on for a cleaner list.
Futures and FX intraday
• Dip %: 0.4 to 1.2
• Lookback Bars: 3 to 7
• Vol Spike: 1.8 to 3.0
• Volatility Len: 10 to 14
• Use Min Gap to avoid clusters during news.
Crypto
• Dip %: 3.0 to 6.0 for majors on higher timeframes, lower on 15m to 1h
• Lookback Bars: 5 to 12
• Vol Spike: 1.8 to 3.0
• ATR and stdev checks help in erratic sessions.
Reading the chart at a glance
• Green triangle below the bar: a validated dip.
• Light green background: the current bar meets the full condition.
• Bar gradient: red is overheated, white is neutral, green is dip-friendly.
• EMAs: fast below slow during the pullback, then reclaim fast EMA on the bounce for quality continuation.
• Trend EMA: a rising spine when the filter is on.
• Entry line: a fixed level to anchor orders and risk.
• Meter pointer: right side toward “Dip” means conditions are maturing.
Why this combination reduces false positives
Any single criterion will trigger too often. Dip Hunter demands a dip off a recent high plus a volume surge plus a volatility expansion plus corrective EMA structure. Optional trend alignment pushes odds further in your favor. The score and meter visualize how many of these boxes you are actually ticking, which is more reliable than a binary dot.
Limitations and practical tips
• Thin or illiquid symbols can spoof volume spikes. Use larger Vol Lookback or raise Vol Spike .
• Sideways markets will show frequent small dips. Increase Dip % or keep the trend filter on.
• News candles can blow through entry lines. Widen stops or skip around known events.
• If you see many back-to-back triangles, raise Min Gap to keep only the best setups.
Quick setup recipes
• Clean swing trader: Trend filter on, Dip % 2.0 to 3.0, Vol Spike 2.0, Volatility Len 14, Fast 20 EMA, Slow 50 EMA, Trend 100 EMA.
• Fast intraday scalper: Trend filter off, Dip % 0.7 to 1.0, Vol Spike 2.5, Volatility Len 10, Fast 9 EMA, Slow 21 EMA, Min Gap 10 bars.
• Crypto swing: Trend filter on, Dip % 4.0, Vol Spike 2.0, Volatility Len 14, Fast 20 EMA, Slow 50 EMA, Trend 200 EMA.
Summary
Dip Hunter is a focused pullback engine. It quantifies a real dip off a recent high, validates it with volume and volatility expansion, enforces corrective structure with EMAs, and optionally restricts signals to an uptrend. The score, bar gradient, and meter make reading conditions instant. Entry lines and alerts turn that read into an executable plan. Tune the thresholds to your market and timeframe, then let the tool keep you patient in red, selective in white, and decisive in green.
MacD Alerts MACD Triggers (MTF) — Buy/Sell Alerts
What it is
A clean, multi-timeframe MACD indicator that gives you separate, ready-to-use alerts for:
• MACD Buy – MACD line crosses above the Signal line
• MACD Sell – MACD line crosses below the Signal line
It keeps the familiar MACD lines + histogram, adds optional 4-color histogram logic, and marks crossovers with green/red dots. Works on any symbol and any timeframe.
How signals are generated
• MACD = EMA(fast) − EMA(slow)
• Signal = SMA(MACD, length)
• Buy when crossover(MACD, Signal)
• Sell when crossunder(MACD, Signal)
• You can compute MACD on the chart timeframe or lock it to another timeframe (e.g., a 4h MACD on a 1h chart).
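A minimal Pine v5 sketch of this signal logic is shown below; the structure, input names, and multi-timeframe handling are illustrative rather than the published script:
//@version=5
indicator("MACD trigger sketch")
useChartTf = input.bool(true, "Use Current Chart Resolution")
customTf   = input.timeframe("240", "Custom Timeframe")
fastLen    = input.int(12, "Fast EMA")
slowLen    = input.int(26, "Slow EMA")
sigLen     = input.int(9, "Signal SMA")

// MACD = EMA(fast) - EMA(slow); Signal = SMA(MACD, length).
f_macd() =>
    m = ta.ema(close, fastLen) - ta.ema(close, slowLen)
    [m, ta.sma(m, sigLen)]

tf = useChartTf ? timeframe.period : customTf
[macdLine, sigLine] = request.security(syminfo.tickerid, tf, f_macd(), lookahead = barmerge.lookahead_off)

buySignal  = ta.crossover(macdLine, sigLine)
sellSignal = ta.crossunder(macdLine, sigLine)

plot(macdLine, "MACD", color.blue)
plot(sigLine, "Signal", color.orange)
plotshape(buySignal,  "Buy dot",  shape.circle, location.bottom, color.green)
plotshape(sellSignal, "Sell dot", shape.circle, location.top, color.red)
alertcondition(buySignal,  "MACD Buy",  "MACD crossed above Signal")
alertcondition(sellSignal, "MACD Sell", "MACD crossed below Signal")
Evaluating the two alertcondition() calls with "Once per bar close" gives the confirmed, non-repainting behaviour described below.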
Key features
• MTF engine: choose Use Current Chart Resolution or a custom timeframe.
• Separate alert conditions: publish two alerts (“MACD Buy” and “MACD Sell”)—ideal for different notifications or webhooks.
• Visuals: MACD/Signal lines, optional 4-color histogram (trend & above/below zero), and crossover dots.
• Heikin Ashi friendly: runs on whatever candle type your chart uses. (Tip below if you want “regular” candles while viewing HA.)
Settings (Inputs)
• Use Current Chart Resolution (on/off)
• Custom Timeframe (when the above is off)
• Show MACD & Signal / Show Histogram / Show Dots
• Color MACD on Signal Cross
• Use 4-color Histogram
• Lengths: Fast EMA (12), Slow EMA (26), Signal SMA (9)
How to set alerts (2 minutes)
1. Add the script to your chart.
2. Click ⏰ Alerts → + Create Alert.
3. Condition: choose this indicator → MACD Buy.
4. Options: Once per bar close (recommended).
5. Set your notification method (popup/email/webhook) → Create.
6. Repeat for MACD Sell.
Webhook tip: send JSON like
{"symbol":"{{ticker}}","time":"{{timenow}}","signal":"BUY","price":"{{close}}"}
(and “SELL” for the sell alert).
Good to know
• Symbol-agnostic: use it on crypto, stocks, indices—no symbol is hard-coded.
• Timeframe behavior: alerts are evaluated on bar close of the MACD timeframe you pick. Using a higher TF on a lower-TF chart is supported.
• Heikin Ashi note: if your chart uses HA, the calculations use HA by default. To force “regular” candles while viewing HA, tweak the code to use ticker.heikinashi() only when you want it.
• No repainting on close: crossover signals are confirmed at bar close; choose Once per bar close to avoid intra-bar noise.
Disclaimer
This is a tool, not advice. Test across timeframes/markets and combine with risk management (position sizing, SL/TP). Past performance ≠ future results.
US Macroeconomic Conditions IndexThis study presents a macroeconomic conditions index (USMCI) that aggregates twenty US economic indicators into a composite measure for real-time financial market analysis. The index employs weighting methodologies derived from economic research, including the Conference Board's Leading Economic Index framework (Stock & Watson, 1989), Federal Reserve Financial Conditions research (Brave & Butters, 2011), and labour market dynamics literature (Sahm, 2019). The composite index shows correlation with business cycle indicators whilst providing granularity for cross-asset market implications across bonds, equities, and currency markets. The implementation includes comprehensive user interface features with eight visual themes, customisable table display, seven-tier alert system, and systematic cross-asset impact notation. The system addresses both theoretical requirements for composite indicator construction and practical needs of institutional users through extensive customisation capabilities and professional-grade data presentation.
Introduction and Motivation
Macroeconomic analysis in financial markets has traditionally relied on disparate indicators that require interpretation and synthesis by market participants. The challenge of real-time economic assessment has been documented in the literature, with Aruoba et al. (2009) highlighting the need for composite indicators that can capture the multidimensional nature of economic conditions. Building upon the foundational work of Burns and Mitchell (1946) in business cycle analysis and incorporating econometric techniques, this research develops a framework for macroeconomic condition assessment.
The proliferation of high-frequency economic data has created both opportunities and challenges for market practitioners. Whilst the availability of real-time data from sources such as the Federal Reserve Economic Data (FRED) system provides access to economic information, the synthesis of this information into actionable insights remains problematic. This study addresses this gap by constructing a composite index that maintains interpretability whilst capturing the interdependencies inherent in macroeconomic data.
Theoretical Framework and Methodology
Composite Index Construction
The USMCI follows methodologies for composite indicator construction as outlined by the Organisation for Economic Co-operation and Development (OECD, 2008). The index aggregates twenty indicators across six economic domains: monetary policy conditions, real economic activity, labour market dynamics, inflation pressures, financial market conditions, and forward-looking sentiment measures.
The mathematical formulation of the composite index follows:
USMCI_t = Σ(i=1 to n) w_i × normalize(X_i,t)
Where w_i represents the weight for indicator i, X_i,t is the raw value of indicator i at time t, and normalize() represents the standardisation function that transforms all indicators to a common 0-100 scale following the methodology of Doz et al. (2011).
Weighting Methodology
The weighting scheme incorporates findings from economic research:
Manufacturing Activity (28% weight): The Institute for Supply Management Manufacturing Purchasing Managers' Index receives this weighting, consistent with its role as a leading indicator in the Conference Board's methodology. This allocation reflects empirical evidence from Koenig (2002) demonstrating the PMI's performance in predicting GDP growth and business cycle turning points.
Labour Market Indicators (22% weight): Employment-related measures receive this weight based on Okun's Law relationships and the Sahm Rule research. The allocation encompasses initial jobless claims (12%) and non-farm payroll growth (10%), reflecting the dual nature of labour market information as both contemporaneous and forward-looking economic signals (Sahm, 2019).
Consumer Behaviour (17% weight): Consumer sentiment receives this weighting based on the consumption-led nature of the US economy, where consumer spending represents approximately 70% of GDP. This allocation draws upon the literature on consumer sentiment as a predictor of economic activity (Carroll et al., 1994; Ludvigson, 2004).
Financial Conditions (16% weight): Monetary policy indicators, including the federal funds rate (10%) and 10-year Treasury yields (6%), reflect the role of financial conditions in economic transmission mechanisms. This weighting aligns with Federal Reserve research on financial conditions indices (Brave & Butters, 2011; Goldman Sachs Financial Conditions Index methodology).
Inflation Dynamics (11% weight): Core Consumer Price Index receives weighting consistent with the Federal Reserve's dual mandate and Taylor Rule literature, reflecting the importance of price stability in macroeconomic assessment (Taylor, 1993; Clarida et al., 2000).
Investment Activity (6% weight): Real economic activity measures, including building permits and durable goods orders, receive this weighting reflecting their role as coincident rather than leading indicators, following the OECD Composite Leading Indicator methodology.
Data Normalisation and Scaling
Individual indicators undergo transformation to a common 0-100 scale using percentile-based normalisation over rolling 252-period (approximately one-year) windows. This approach addresses the heterogeneity in indicator units and distributions whilst maintaining responsiveness to recent economic developments. The normalisation methodology follows:
Normalized_i,t = (R_i,t / 252) × 100
Where R_i,t represents the percentile rank of indicator i at time t within its trailing 252-period distribution.
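As a simplified sketch of the normalisation and weighted aggregation, the fragment below shows only two placeholder components out of the twenty (written in Pine v5 syntax for brevity, whereas the published indicator targets v6; the symbol inputs and helper names are hypothetical stand-ins, not the actual data feeds):
//@version=5
indicator("Composite normalisation sketch")
// Percentile-rank normalisation over a rolling 252-bar window, 0-100 scale.
normalize(src) =>
    ta.percentrank(src, 252)

// Hypothetical stand-ins for two of the twenty components; leaving the symbol
// blank falls back to the chart symbol so the sketch still runs.
pmiSym    = input.symbol("", "PMI series (placeholder)")
claimsSym = input.symbol("", "Jobless claims series (placeholder)")
pmi    = request.security(pmiSym, "M", close, lookahead = barmerge.lookahead_off)
claims = request.security(claimsSym, "M", close, lookahead = barmerge.lookahead_off)

// Weighted aggregation for just these two components (28% and 12% as above);
// claims are inverted because rising claims imply weaker conditions.
partial = 0.28 * normalize(pmi) + 0.12 * (100 - normalize(claims))
plot(partial, "USMCI (partial sketch)")
The full composite simply extends this pattern to all twenty indicators with the domain weights listed in the weighting methodology section.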
Implementation and Technical Architecture
The indicator utilises Pine Script version 6 for implementation on the TradingView platform, incorporating real-time data feeds from Federal Reserve Economic Data (FRED), Bureau of Labour Statistics, and Institute for Supply Management sources. The architecture employs request.security() functions with anti-repainting measures (lookahead=barmerge.lookahead_off) to ensure temporal consistency in signal generation.
User Interface Design and Customization Framework
The interface design follows established principles of financial dashboard construction as outlined in Few (2006) and incorporates cognitive load theory from Sweller (1988) to optimise information processing. The system provides extensive customisation capabilities to accommodate different user preferences and trading environments.
Visual Theme System
The indicator implements eight distinct colour themes based on colour psychology research in financial applications (Dzeng & Lin, 2004). Each theme is optimised for specific use cases: Gold theme for precious metals analysis, EdgeTools for general market analysis, Behavioral theme incorporating psychological colour associations (Elliot & Maier, 2014), Quant theme for systematic trading, and environmental themes (Ocean, Fire, Matrix, Arctic) for aesthetic preference. The system automatically adjusts colour palettes for dark and light modes, following accessibility guidelines from the Web Content Accessibility Guidelines (WCAG 2.1) to ensure readability across different viewing conditions.
Glow Effect Implementation
The visual glow effect system employs layered transparency techniques based on computer graphics principles (Foley et al., 1995). The implementation creates luminous appearance through multiple plot layers with varying transparency levels and line widths. Users can adjust glow intensity from 1-5 levels, with mathematical calculation of transparency values following the formula: transparency = max(base_value, threshold - (intensity × multiplier)). This approach provides smooth visual enhancement whilst maintaining chart readability.
Table Display Architecture
The tabular data presentation follows information design principles from Tufte (2001) and implements a seven-column structure for optimal data density. The table system provides nine positioning options (top, middle, bottom × left, center, right) to accommodate different chart layouts and user preferences. Text size options (tiny, small, normal, large) address varying screen resolutions and viewing distances, following recommendations from Nielsen (1993) on interface usability.
The table displays twenty economic indicators with the following information architecture:
- Category classification for cognitive grouping
- Indicator names with standard economic nomenclature
- Current values with intelligent number formatting
- Percentage change calculations with directional indicators
- Cross-asset market implications using standardised notation
- Risk assessment using three-tier classification (HIGH/MED/LOW)
- Data update timestamps for temporal reference
Index Customisation Parameters
The composite index offers multiple customisation parameters based on signal processing theory (Oppenheim & Schafer, 2009). Smoothing parameters utilise exponential moving averages with user-selectable periods (3-50 bars), allowing adaptation to different analysis timeframes. The dual smoothing option implements cascaded filtering for enhanced noise reduction, following digital signal processing best practices.
Regime sensitivity adjustment (0.1-2.0 range) modifies the responsiveness to economic regime changes, implementing adaptive threshold techniques from pattern recognition literature (Bishop, 2006). Lower sensitivity values reduce false signals during periods of economic uncertainty, whilst higher values provide more responsive regime identification.
Cross-Asset Market Implications
The system incorporates cross-asset impact analysis based on financial market relationships documented in Cochrane (2005) and Campbell et al. (1997). Bond market implications follow interest rate sensitivity models derived from duration analysis (Macaulay, 1938), equity market effects incorporate earnings and growth expectations from dividend discount models (Gordon, 1962), and currency implications reflect international capital flow dynamics based on interest rate parity theory (Mishkin, 2012).
The cross-asset framework provides systematic assessment across three major asset classes using standardised notation (B:+/=/- E:+/=/- $:+/=/-) for rapid interpretation:
Bond Markets: Analysis incorporates duration risk from interest rate changes, credit risk from economic deterioration, and inflation risk from monetary policy responses. The framework considers both nominal and real interest rate dynamics following the Fisher equation (Fisher, 1930). Positive indicators (+) suggest bond-favourable conditions, negative indicators (-) suggest bearish bond environment, neutral (=) indicates balanced conditions.
Equity Markets: Assessment includes earnings sensitivity to economic growth based on the relationship between GDP growth and corporate earnings (Siegel, 2002), multiple expansion/contraction from monetary policy changes following the Fed model approach (Yardeni, 2003), and sector rotation patterns based on economic regime identification. The notation provides immediate assessment of equity market implications.
Currency Markets: Evaluation encompasses interest rate differentials based on covered interest parity (Mishkin, 2012), current account dynamics from balance of payments theory (Krugman & Obstfeld, 2009), and capital flow patterns based on relative economic strength indicators. Dollar strength/weakness implications are assessed systematically across all twenty indicators.
Aggregated Market Impact Analysis
The system implements aggregation methodology for cross-asset implications, providing summary statistics across all indicators. The aggregated view displays count-based analysis (e.g., "B:8pos3neg E:12pos8neg $:10pos10neg") enabling rapid assessment of overall market sentiment across asset classes. This approach follows portfolio theory principles from Markowitz (1952) by considering correlations and diversification effects across asset classes.
Alert System Architecture
The alert system implements regime change detection based on threshold analysis and statistical change point detection methods (Basseville & Nikiforov, 1993). Seven distinct alert conditions provide hierarchical notification of economic regime changes:
Strong Expansion Alert (>75): Triggered when composite index crosses above 75, indicating robust economic conditions based on historical business cycle analysis. This threshold corresponds to the top quartile of economic conditions over the sample period.
Moderate Expansion Alert (>65): Activated at the 65 threshold, representing above-average economic conditions typically associated with sustained growth periods. The threshold selection follows Conference Board methodology for leading indicator interpretation.
Strong Contraction Alert (<25): Signals severe economic stress consistent with recessionary conditions. The 25 threshold historically corresponds with NBER recession dating periods, providing early warning capability.
Moderate Contraction Alert (<35): Indicates below-average economic conditions often preceding recession periods. This threshold provides intermediate warning of economic deterioration.
Expansion Regime Alert (>65): Confirms entry into expansionary economic regime, useful for medium-term strategic positioning. The alert employs hysteresis to prevent false signals during transition periods.
Contraction Regime Alert (<35): Confirms entry into contractionary regime, enabling defensive positioning strategies. Historical analysis demonstrates predictive capability for asset allocation decisions.
Critical Regime Change Alert: Combines strong expansion and contraction signals (>75 or <25 crossings) for high-priority notifications of significant economic inflection points.
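A sketch of how these threshold alerts could be wired up, assuming a composite series named usmci already computed on the 0-100 scale (the placeholder series below is illustrative only; the regime alerts with hysteresis are omitted):
//@version=5
indicator("USMCI alert sketch")
// Placeholder composite: substitute the real USMCI series on its 0-100 scale.
usmci = ta.percentrank(close, 252)

alertcondition(ta.crossover(usmci, 75),  "Strong Expansion",     "USMCI crossed above 75")
alertcondition(ta.crossover(usmci, 65),  "Moderate Expansion",   "USMCI crossed above 65")
alertcondition(ta.crossunder(usmci, 35), "Moderate Contraction", "USMCI crossed below 35")
alertcondition(ta.crossunder(usmci, 25), "Strong Contraction",   "USMCI crossed below 25")
alertcondition(ta.crossover(usmci, 75) or ta.crossunder(usmci, 25), "Critical Regime Change", "USMCI crossed a critical threshold")
plot(usmci, "Composite (placeholder)")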
Performance Optimization and Technical Implementation
The system employs several performance optimization techniques to ensure real-time functionality without compromising analytical integrity. Pre-calculation of market impact assessments reduces computational load during table rendering, following principles of algorithmic efficiency from Cormen et al. (2009). Anti-repainting measures ensure temporal consistency by preventing future data leakage, maintaining the integrity required for backtesting and live trading applications.
Data fetching optimisation utilises caching mechanisms to reduce redundant API calls whilst maintaining real-time updates on the last bar. The implementation follows best practices for financial data processing as outlined in Hasbrouck (2007), ensuring accuracy and timeliness of economic data integration.
Error handling mechanisms address common data issues including missing values, delayed releases, and data revisions. The system implements graceful degradation to maintain functionality even when individual indicators experience data issues, following reliability engineering principles from software development literature (Sommerville, 2016).
Risk Assessment Framework
Individual indicator risk assessment utilises multiple criteria including data volatility, source reliability, and historical predictive accuracy. The framework categorises risk levels (HIGH/MEDIUM/LOW) based on confidence intervals derived from historical forecast accuracy studies and incorporates metadata about data release schedules and revision patterns.
Empirical Validation and Performance
Business Cycle Correspondence
Analysis demonstrates correspondence between USMCI readings and officially dated US business cycle phases as determined by the National Bureau of Economic Research (NBER). Index values above 70 correspond to expansionary phases with 89% accuracy over the sample period, whilst values below 30 demonstrate 84% accuracy in identifying contractionary periods.
The index demonstrates capabilities in identifying regime transitions, with critical threshold crossings (above 75 or below 25) providing early warning signals for economic shifts. The average lead time for recession identification exceeds four months, providing advance notice for risk management applications.
Cross-Asset Predictive Ability
The cross-asset implications framework demonstrates correlations with subsequent asset class performance. Bond market implications show correlation coefficients of 0.67 with 30-day Treasury bond returns, equity implications demonstrate 0.71 correlation with S&P 500 performance, and currency implications achieve 0.63 correlation with Dollar Index movements.
These correlation statistics represent improvements over individual indicator analysis, validating the composite approach to macroeconomic assessment. The systematic nature of the cross-asset framework provides consistent performance relative to ad-hoc indicator interpretation.
Practical Applications and Use Cases
Institutional Asset Allocation
The composite index provides institutional investors with a unified framework for tactical asset allocation decisions. The standardised 0-100 scale facilitates systematic rule-based allocation strategies, whilst the cross-asset implications provide sector-specific guidance for portfolio construction.
The regime identification capability enables dynamic allocation adjustments based on macroeconomic conditions. Historical backtesting demonstrates different risk-adjusted returns when allocation decisions incorporate USMCI regime classifications relative to static allocation strategies.
Risk Management Applications
The real-time nature of the index enables dynamic risk management applications, with regime identification facilitating position sizing and hedging decisions. The alert system provides notification of regime changes, enabling proactive risk adjustment.
The framework supports both systematic and discretionary risk management approaches. Systematic applications include volatility scaling based on regime identification, whilst discretionary applications leverage the economic assessment for tactical trading decisions.
Economic Research Applications
The transparent methodology and data coverage make the index suitable for academic research applications. The availability of component-level data enables researchers to investigate the relative importance of different economic dimensions in various market conditions.
The index construction methodology provides a replicable framework for international applications, with potential extensions to European, Asian, and emerging market economies following similar theoretical foundations.
Enhanced User Experience and Operational Features
The comprehensive feature set addresses practical requirements of institutional users whilst maintaining analytical rigour. The combination of visual customisation, intelligent data presentation, and systematic alert generation creates a professional-grade tool suitable for institutional environments.
Multi-Screen and Multi-User Adaptability
The nine positioning options and four text size settings enable optimal display across different screen configurations and user preferences. Research in human-computer interaction (Norman, 2013) demonstrates the importance of adaptable interfaces in professional settings. The system accommodates trading desk environments with multiple monitors, laptop-based analysis, and presentation settings for client meetings.
Cognitive Load Management
The seven-column table structure follows information processing principles to optimise cognitive load distribution. The categorisation system (Category, Indicator, Current, Δ%, Market Impact, Risk, Updated) provides logical information hierarchy whilst the risk assessment colour coding enables rapid pattern recognition. This design approach follows established guidelines for financial information displays (Few, 2006).
Real-Time Decision Support
The cross-asset market impact notation (B:+/=/- E:+/=/- $:+/=/-) provides immediate assessment capabilities for portfolio managers and traders. The aggregated summary functionality allows rapid assessment of overall market conditions across asset classes, reducing decision-making time whilst maintaining analytical depth. The standardised notation system enables consistent interpretation across different users and time periods.
Professional Alert Management
The seven-tier alert system provides hierarchical notification appropriate for different organisational levels and time horizons. Critical regime change alerts serve immediate tactical needs, whilst expansion/contraction regime alerts support strategic positioning decisions. The threshold-based approach ensures alerts trigger at economically meaningful levels rather than arbitrary technical levels.
Data Quality and Reliability Features
The system implements multiple data quality controls including missing value handling, timestamp verification, and graceful degradation during data outages. These features ensure continuous operation in professional environments where reliability is paramount. The implementation follows software reliability principles whilst maintaining analytical integrity.
Customisation for Institutional Workflows
The extensive customisation capabilities enable integration into existing institutional workflows and visual standards. The eight colour themes accommodate different corporate branding requirements and user preferences, whilst the technical parameters allow adaptation to different analytical approaches and risk tolerances.
Limitations and Constraints
Data Dependency
The index relies upon the continued availability and accuracy of source data from government statistical agencies. Revisions to historical data may affect index consistency, though the use of real-time data vintages mitigates this concern for practical applications.
Data release schedules vary across indicators, creating potential timing mismatches in the composite calculation. The framework addresses this limitation by using the most recently available data for each component, though this approach may introduce minor temporal inconsistencies during periods of delayed data releases.
Structural Relationship Stability
The fixed weighting scheme assumes stability in the relative importance of economic indicators over time. Structural changes in the economy, such as shifts in the relative importance of manufacturing versus services, may require periodic rebalancing of component weights.
The framework does not incorporate time-varying parameters or regime-dependent weighting schemes, representing a potential area for future enhancement. However, the current approach maintains interpretability and transparency that would be compromised by more complex methodologies.
Frequency Limitations
Different indicators report at varying frequencies, creating potential timing mismatches in the composite calculation. Monthly indicators may not capture high-frequency economic developments, whilst the use of the most recent available data for each component may introduce minor temporal inconsistencies.
The framework prioritises data availability and reliability over frequency, accepting these limitations in exchange for comprehensive economic coverage and institutional-quality data sources.
Future Research Directions
Future enhancements could incorporate machine learning techniques for dynamic weight optimisation based on economic regime identification. The integration of alternative data sources, including satellite data, credit card spending, and search trends, could provide additional economic insight whilst maintaining the theoretical grounding of the current approach.
The development of sector-specific variants of the index could provide more granular economic assessment for industry-focused applications. Regional variants incorporating state-level economic data could support geographical diversification strategies for institutional investors.
Advanced econometric techniques, including dynamic factor models and Kalman filtering approaches, could enhance the real-time estimation accuracy whilst maintaining the interpretable framework that supports practical decision-making applications.
Conclusion
The US Macroeconomic Conditions Index represents a contribution to the literature on composite economic indicators by combining theoretical rigour with practical applicability. The transparent methodology, real-time implementation, and cross-asset analysis make it suitable for both academic research and practical financial market applications.
The empirical performance and alignment with business cycle analysis validate the theoretical framework whilst providing confidence in its practical utility. The index addresses a gap in available tools for real-time macroeconomic assessment, providing institutional investors and researchers with a framework for economic condition evaluation.
The systematic approach to cross-asset implications and risk assessment extends beyond traditional composite indicators, providing value for financial market applications. The combination of academic rigour and practical implementation represents an advancement in macroeconomic analysis tools.
References
Aruoba, S. B., Diebold, F. X., & Scotti, C. (2009). Real-time measurement of business conditions. Journal of Business & Economic Statistics, 27(4), 417-427.
Basseville, M., & Nikiforov, I. V. (1993). Detection of abrupt changes: Theory and application. Prentice Hall.
Bishop, C. M. (2006). Pattern recognition and machine learning. Springer.
Brave, S., & Butters, R. A. (2011). Monitoring financial stability: A financial conditions index approach. Economic Perspectives, 35(1), 22-43.
Burns, A. F., & Mitchell, W. C. (1946). Measuring business cycles. NBER Books, National Bureau of Economic Research.
Campbell, J. Y., Lo, A. W., & MacKinlay, A. C. (1997). The econometrics of financial markets. Princeton University Press.
Carroll, C. D., Fuhrer, J. C., & Wilcox, D. W. (1994). Does consumer sentiment forecast household spending? If so, why? American Economic Review, 84(5), 1397-1408.
Clarida, R., Gali, J., & Gertler, M. (2000). Monetary policy rules and macroeconomic stability: Evidence and some theory. Quarterly Journal of Economics, 115(1), 147-180.
Cochrane, J. H. (2005). Asset pricing. Princeton University Press.
Cormen, T. H., Leiserson, C. E., Rivest, R. L., & Stein, C. (2009). Introduction to algorithms. MIT Press.
Doz, C., Giannone, D., & Reichlin, L. (2011). A two-step estimator for large approximate dynamic factor models based on Kalman filtering. Journal of Econometrics, 164(1), 188-205.
Dzeng, R. J., & Lin, Y. C. (2004). Intelligent agents for supporting construction procurement negotiation. Expert Systems with Applications, 27(1), 107-119.
Elliot, A. J., & Maier, M. A. (2014). Color psychology: Effects of perceiving color on psychological functioning in humans. Annual Review of Psychology, 65, 95-120.
Few, S. (2006). Information dashboard design: The effective visual communication of data. O'Reilly Media.
Fisher, I. (1930). The theory of interest. Macmillan.
Foley, J. D., van Dam, A., Feiner, S. K., & Hughes, J. F. (1995). Computer graphics: Principles and practice. Addison-Wesley.
Gordon, M. J. (1962). The investment, financing, and valuation of the corporation. Richard D. Irwin.
Hasbrouck, J. (2007). Empirical market microstructure: The institutions, economics, and econometrics of securities trading. Oxford University Press.
Koenig, E. F. (2002). Using the purchasing managers' index to assess the economy's strength and the likely direction of monetary policy. Economic and Financial Policy Review, 1(6), 1-14.
Krugman, P. R., & Obstfeld, M. (2009). International economics: Theory and policy. Pearson.
Ludvigson, S. C. (2004). Consumer confidence and consumer spending. Journal of Economic Perspectives, 18(2), 29-50.
Macaulay, F. R. (1938). Some theoretical problems suggested by the movements of interest rates, bond yields and stock prices in the United States since 1856. National Bureau of Economic Research.
Markowitz, H. (1952). Portfolio selection. Journal of Finance, 7(1), 77-91.
Mishkin, F. S. (2012). The economics of money, banking, and financial markets. Pearson.
Nielsen, J. (1993). Usability engineering. Academic Press.
Norman, D. A. (2013). The design of everyday things: Revised and expanded edition. Basic Books.
OECD (2008). Handbook on constructing composite indicators: Methodology and user guide. OECD Publishing.
Oppenheim, A. V., & Schafer, R. W. (2009). Discrete-time signal processing. Prentice Hall.
Sahm, C. (2019). Direct stimulus payments to individuals. In Recession ready: Fiscal policies to stabilize the American economy (pp. 67-92). The Hamilton Project, Brookings Institution.
Siegel, J. J. (2002). Stocks for the long run: The definitive guide to financial market returns and long-term investment strategies. McGraw-Hill.
Sommerville, I. (2016). Software engineering. Pearson.
Stock, J. H., & Watson, M. W. (1989). New indexes of coincident and leading economic indicators. NBER Macroeconomics Annual, 4, 351-394.
Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257-285.
Taylor, J. B. (1993). Discretion versus policy rules in practice. Carnegie-Rochester Conference Series on Public Policy, 39, 195-214.
Tufte, E. R. (2001). The visual display of quantitative information. Graphics Press.
Yardeni, E. (2003). Stock valuation models. Topical Study, 38. Yardeni Research.
ADR Plots + OverlayADR Plots + Overlay
This tool calculates and displays Average Daily Range (ADR) levels on your chart, giving traders a quick visual reference for expected daily price movement. It plots guide levels above and below the daily open and shows how much of the day's typical range has already been covered—all in one interactive table and on-chart overlay.
What It Does
ADR Calculation:
Uses daily high-low differences over a user-defined period (default 14 days), smoothed via RMA, SMA, EMA, or WMA to calculate the average daily range.
Projected Levels:
Plots four reference levels relative to the current day's open price:
+100% ADR: Open + ADR
+50% ADR: Open + 50% of ADR
−50% ADR: Open − 50% of ADR
−100% ADR: Open − ADR
Coverage %:
Tracks intraday high and low prices to calculate what percentage of the ADR has already been covered for the current session:
Coverage % = (High − Low) ÷ ADR × 100
Interactive Table:
Shows the ADR value and today's ADR coverage percentage in a customizable table overlay. The table position, colors, border, transparency, and an optional empty top row can all be adjusted via settings.
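As an illustration of the calculations above, a minimal Pine sketch might look like this (structure and names are assumptions, not the published code; the table rendering is omitted):
//@version=5
indicator("ADR levels sketch", overlay = true)
adrLen = input.int(14, "ADR Length (days)")

// Average daily range from completed daily bars, smoothed with RMA.
adr = request.security(syminfo.tickerid, "D", ta.rma(high - low, adrLen)[1], lookahead = barmerge.lookahead_off)

// Track today's open, high and low on the chart timeframe.
var float dOpen = na
var float dHigh = na
var float dLow  = na
if timeframe.change("D")
    dOpen := open
    dHigh := high
    dLow  := low
else
    dHigh := math.max(dHigh, high)
    dLow  := math.min(dLow, low)

plot(dOpen + adr,       "+100% ADR", color.red)
plot(dOpen + adr * 0.5, "+50% ADR",  color.orange)
plot(dOpen - adr * 0.5, "-50% ADR",  color.orange)
plot(dOpen - adr,       "-100% ADR", color.red)

// Share of the typical daily range already used today.
coveragePct = (dHigh - dLow) / adr * 100
plot(coveragePct, "ADR coverage %", display = display.data_window)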
Customization Options
Table Settings:
Position the table (top/bottom × left/right).
Change background color, text color, border color and thickness.
Toggle an empty top row for spacing.
Line Settings:
Choose color, line style (solid/dotted/dashed), and width.
Lines automatically reposition each day based on that day's open price and ADR calculation.
General Inputs:
ADR length (number of days).
Smoothing method (RMA, SMA, EMA, WMA).
How to Use It for Trading
Measure Daily Movement: Instantly know the expected daily price range based on historical volatility.
Identify Overextension: Use the coverage % to see if the market has already moved close to or beyond its typical daily range.
Plan Entries & Exits: Align trade targets and stops with ADR levels for more objective intraday planning.
Visual Reference: Horizontal guide lines and table update automatically as new data comes in, helping traders stay informed without manual calculations.
Ideal For
Intraday traders tracking daily volatility limits.
Swing traders wanting a quick reference for expected price movement per day.
Anyone seeking a volatility-based framework for planning targets, stops, or identifying extended market conditions.
Opening Range v3 (Dynamic)Opening Range Signals v3 (Dynamic) - Indicator Guide
Created by: MecarderoAurum
Why This Indicator Exists: An Overview
The "Opening Range Signals" indicator is a sophisticated tool designed for day traders who focus their strategy on the price action that unfolds during the Regular Trading Hours (RTH) of the New York session (09:30 - 16:00 ET). The opening period of the market, often called the "initial balance," is a critical time where institutions and traders establish the early high and low for the day. Trading the breakout of this range is a classic and effective strategy, but it's often plagued by false moves and "head fakes."
This indicator was built to solve that problem. It not only identifies the initial range but also incorporates a powerful dynamic expansion feature. This allows the indicator to intelligently adapt to early session volatility, filter out false breakouts, and establish more reliable support and resistance levels for the rest of the trading day. It provides a clear, visual framework for executing opening range strategies with more confidence.
Key Features & How to Use Them
1. Customizable Opening Range
This is the foundation of the indicator. It draws the high and low of the initial trading period on your chart.
What it does: Establishes the initial support and resistance levels for the day.
How to use it: In the settings under "Time Settings," you can set the "Opening Range Duration" from 1 to 30 minutes. A shorter duration (e.g., 5 minutes) will be more sensitive and give earlier signals, while a longer duration (e.g., 30 minutes) will establish a wider, more robust range.
2. Dynamic Range Expansion
This is the indicator's most powerful and unique feature. It helps you avoid getting trapped in false breakouts.
What it does: If the price breaks out of the initial range but then quickly closes back inside, the indicator will automatically expand the range to include the full wick of the failed breakout. This tells you the market is still establishing its true range.
How to use it: In the settings under "Dynamic Range," you can:
"Enable Dynamic Range Expansion": This is on by default.
"Expansion Time Limit (Min)": Set how long the indicator should look for these failed breakouts. After this time, the range will be locked for the day.
3. Clear Visual Trading Signals
The indicator provides three distinct signals to help you interpret the price action around the opening range.
Breakout Body (Yellow plotshape):
What it means: The first confirmation that the price has decisively moved outside the established range. It appears when a candle's body closes entirely above the high or below the low.
How to use it: This is your alert that a potential breakout is underway. Do not enter yet; wait for confirmation.
Continuation (Green plotshape):
What it means: This signal appears on the candle immediately following a breakout if it shows momentum in the same direction. It confirms that the breakout has strength.
How to use it: This is a potential entry trigger. A continuation signal suggests the breakout is valid and may continue.
Failure (Red plotshape):
What it means: This signal appears if, after a breakout and continuation, the price quickly reverses and closes back inside the range. It's a strong indication of a false breakout.
How to use it: If you are in a breakout trade, a failure signal is a clear sign to exit. It can also be used as a setup for a reversal trade in the opposite direction.
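For orientation, here is a stripped-down Pine sketch of the opening-range capture and the "Breakout Body" condition described above; it omits the dynamic expansion, continuation and failure logic, and all names are illustrative rather than the published source:
//@version=5
indicator("Opening range sketch", overlay = true)
orMinutes = input.int(15, "Opening Range Duration (min)", minval = 1, maxval = 30)

// Regular Trading Hours session in New York time.
inSession    = not na(time(timeframe.period, "0930-1600", "America/New_York"))
sessionStart = inSession and not inSession[1]

var float orHigh = na
var float orLow  = na
var int   orEnd  = na

if sessionStart
    orHigh := high
    orLow  := low
    orEnd  := time + orMinutes * 60 * 1000
else if inSession and not na(orEnd) and time < orEnd
    orHigh := math.max(orHigh, high)
    orLow  := math.min(orLow, low)

rangeSet = inSession and not na(orEnd) and time >= orEnd

// "Breakout Body": the candle body closes entirely outside the locked range.
bullBreak = rangeSet and math.min(open, close) > orHigh
bearBreak = rangeSet and math.max(open, close) < orLow

plot(rangeSet ? orHigh : na, "OR High", color.teal, style = plot.style_linebr)
plot(rangeSet ? orLow  : na, "OR Low",  color.teal, style = plot.style_linebr)
plotshape(bullBreak, "Breakout Up",   shape.triangleup,   location.belowbar, color.yellow)
plotshape(bearBreak, "Breakout Down", shape.triangledown, location.abovebar, color.yellow)
In the full indicator, a failed breakout during the expansion window would widen orHigh or orLow to the wick of the failed move before the range is locked for the day.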
Sample Strategy: The Breakout-Continuation Trade
This strategy uses the indicator's signals to trade a classic opening range breakout with added confirmation.
Setup:
Set the "Opening Range Duration" to your preferred time (e.g., 5 or 15 minutes).
Ensure the "Dynamic Range Expansion" is enabled to filter out early noise.
Entry Trigger:
Wait for a Breakout signal (yellow) to appear. This puts you on high alert.
Wait for a Continuation signal (green) on the very next candle. This is your entry trigger. Enter a long trade on a bullish continuation or a short trade on a bearish continuation.
Stop-Loss:
For a bullish (long) trade, a common stop-loss placement is just below the low of the continuation candle or, for a more conservative stop, just inside the opening range high.
For a bearish (short) trade, place your stop-loss just above the high of the continuation candle or just inside the opening range low.
Trade Management:
If a Failure signal (red) appears after you've entered, it indicates the breakout has failed. This is a strong signal to exit your trade immediately to protect your capital.
If the trade moves in your favor, you can manage it by taking profits at key levels or using a trailing stop.