Time Series Lag Reduction Filter by Cryptorhythms
Description
A little filter to reduce lag on any time series data. Here we use an EMA to demonstrate how it works, but you could use it in many different ways/applications.
This method can cause overshoot if you get too aggressive with the "lagReduce" setting. In this case, lower the lagReduce variable.
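The script itself isn't reproduced here, but a minimal Pine sketch of one common way to reduce an EMA's lag is shown below; the offset-based formula and the default values are assumptions, with only the lagReduce setting taken from the description above.

//@version=4
study("Lag reduction on an EMA (sketch)", overlay=true)
length = input(20, "EMA length")
lag = input(10, "Offset used for the lag correction")
lagReduce = input(0.5, "lagReduce", minval=0.0, maxval=1.0, step=0.1)
e = ema(close, length)
// push the EMA forward by a fraction of its recent displacement;
// larger lagReduce values reduce lag further but increase overshoot
out = e + (e - e[lag]) * lagReduce
plot(out)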
👍 We hope you enjoyed this indicator and find it useful! We post free crypto analysis, strategies and indicators regularly. This is our 76th script on Tradingview!
Roofing Filter [DW]
This is an experimental study built on the concept of using roofing filters on price data, as proposed by John Ehlers.
Roofing filters are a type of bandpass filter conventionally used in HF radio receivers in the first IF stage to limit the frequency spectrum passed on to later stages in the receiver.
The goal in applying roofing filters to a price signal is to simultaneously attenuate high frequency noise and low frequency distortion to pass an oscillating signal with a nearly zero mean for analysis and/or further calculation.
In this study, there are three filter types to choose from:
-> Ehlers Roofing Filter, which passes data through a 2 pole high pass filter, then through a Super Smoother filter.
-> Gaussian Roofing Filter, which passes data through a 2 pole Gaussian high pass filter, then through a 2 pole Gaussian low pass filter.
-> Butterworth Roofing Filter, which passes data through a 2 pole Butterworth high pass filter, then through a 2 pole Butterworth low pass filter.
Each filter type has different amplitude and delay characteristics, so play around with each type and see which response suits your needs best.
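For reference, a minimal Pine sketch of the first variant, following Ehlers' published roofing filter formulation (a 2 pole high pass fed into a Super Smoother), is given below; the period defaults are placeholders and the actual script may differ in details.

//@version=4
study("Ehlers roofing filter (sketch)")
hpLength = input(48, "High-pass period")
lpLength = input(10, "Super Smoother period")
pi = 2 * asin(1)
// 2 pole high pass filter: attenuates components longer than hpLength
alpha1 = (cos(0.707 * 2 * pi / hpLength) + sin(0.707 * 2 * pi / hpLength) - 1) / cos(0.707 * 2 * pi / hpLength)
var float hp = 0.0
hp := (1 - alpha1 / 2) * (1 - alpha1 / 2) * (close - 2 * nz(close[1], close) + nz(close[2], close)) + 2 * (1 - alpha1) * nz(hp[1]) - (1 - alpha1) * (1 - alpha1) * nz(hp[2])
// Super Smoother: attenuates components shorter than lpLength
a1 = exp(-1.414 * pi / lpLength)
b1 = 2 * a1 * cos(1.414 * pi / lpLength)
c2 = b1
c3 = -a1 * a1
c1 = 1 - c2 - c3
var float filt = 0.0
filt := c1 * (hp + nz(hp[1])) / 2 + c2 * nz(filt[1]) + c3 * nz(filt[2])
plot(filt)
hline(0)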
There is an option to normalize the scale of the output as well. The normalization process in this script is computed by comparing positive and negative outputs to the filter's moving RMS value.
The resulting oscillator can be fed through numerous conventional indicators including Stochastic Oscillator, RSI, CCI, etc. to generate smoother, less distorted indicators for a clearer view of turning points.
Alternatively, it can also act as an indicator itself, as implied by the corresponding color scheme included in the script.
Although roofing filters are not conventionally used in the analysis of market data, applying such spectral analysis techniques may prove to be quite useful for the design of more efficient indicators and more reliable predictions.
Low Pass Channel [DW]
This is an experimental study designed to attenuate higher frequency oscillations in price and volatility with minimal lag.
In this study, a single pole low pass filter is used. The low pass filter's cutoff period is determined either by a fixed user input, or by using an Instantaneous Frequency Measurement (IFM) algorithm.
Most radar warning, electronic countermeasures, and electronic intelligence systems employ IFM to identify threats, map the electronic battlefield, and implement deceptive countermeasures.
The IFM technique used for this study was devised by John Ehlers. It calculates In Phase and Quadrature (IQ) components using the Hilbert Transform and uses them to determine the dominant price cycle.
To generate the channel, the same filter approach is applied to true range then added to and subtracted from the price filter.
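A minimal Pine sketch of that construction is shown below, assuming a fixed cutoff and a simple EMA-style smoothing factor; the original study can instead derive the cutoff from the IFM algorithm and may compute the smoothing factor differently.

//@version=4
study("Low pass channel (sketch)", overlay=true)
cutoff = input(20, "Cutoff period (fixed; the original can derive it via IFM)")
alpha = 2 / (cutoff + 1.0)   // simple single pole smoothing factor (an assumption)
var float lp = na
lp := na(lp[1]) ? close : lp[1] + alpha * (close - lp[1])
var float rng = na
rng := na(rng[1]) ? tr : rng[1] + alpha * (tr - rng[1])   // same filter applied to true range
plot(lp)
plot(lp + rng)
plot(lp - rng)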
Custom bar colors are included for simple wave and trend indication.
Damped Sine Wave Weighted Filter
Introduction
Remember that we can make filters by using convolution, that is, summing the products between the input and the filter coefficients. The set of filter coefficients is sometimes denoted the "kernel"; those coefficients can be a constant value (simple moving average), a linear function (linearly weighted moving average), a gaussian function (gaussian filter), a polynomial function (lsma of degree p, with p = order of the polynomial)... You can make many types of kernels; note however that it is easy to fall into the redundancy trap.
Today a low-lag filter that weights the price with a damped sine wave is proposed; the filter characteristics are discussed below.
A Damped Sine Wave
A damped sine wave is like a sine wave, with the difference that the sine wave's peak amplitude decays over time.
A damped sine wave
Used Kernel
We use a damped sine wave of period length as the kernel.
The coefficients underweight older values, which allows the filter to reduce lag.
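The exact damping used by the script isn't spelled out above, so the Pine sketch below is only an illustration of the idea: a convolution whose kernel is one sine cycle of period length with an exponentially decaying amplitude. The decay constant and the phase are assumptions and may differ from the original kernel.

//@version=4
study("Damped sine wave weighted filter (sketch)", overlay=true)
length = input(50)
pi = 2 * asin(1)
out = 0.0
norm = 0.0
for i = 0 to length - 1
    // one sine cycle over the window, amplitude decaying with i (older bars)
    w = sin(2 * pi * i / length) * exp(-3.0 * i / length)
    out := out + w * close[i]
    norm := norm + w
plot(norm != 0 ? out / norm : na)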
Step Response
Because the filter has overshoot in its step response we can conclude that some frequencies are amplified in the passband; we could have reached this conclusion by simply seeing the negative values in the kernel or the "zero-lag" effect on the closing price.
Enough! We Want To See The Filter!
I should indeed stop bothering you with transient responses, but it's always good to see how the filter acts on simpler signals before seeing it on the closing price. The filter has low lag and can be used as input for other indicators.
Filter with length = 100 as input for the RSI.
The bands/trailing stop utility using the rolling squared mean average error with length 500, with the filter of length 500 as input.
Approximating A Least Squares Moving Average
A least squares moving average has a linear kernel with certain values under 0; an lsma of length k can be approximated using the proposed filter with period p, where p = k + k/4.
Proposed filter (red) with length = 250 and lsma (blue) with length = 200.
Conclusions
The use of damping in filter design can provide extremely useful filters; in fact the ideal kernel, the sinc function, is also a damped sine wave.
Keltner Channel with signals [ChuckBanger]
This is a Keltner Channel where I added Bull and Bear signals. It has a lot of settings to play around with. Have fun...
For more information on Keltner Channel: www.investopedia.com
Decaying Rate of Change Non Linear Filter
This is a potential solution to dealing with the inherent lag in most filters, especially with instruments such as BTC and the effects of long periods of low volatility followed by massive volatility spikes, as well as whipsaws/barts etc.
We can try to solve these issues in a number of ways: adaptive lengths, dynamic weighting, etc. This filter uses a non-linear weighting combined with an exponential decay rate.
With the non-linear weighting the filter can become very responsive to sudden volatility spikes. We can use a short-length absolute rate of change as a method to improve the weighting of relatively high volatility.
c1 = abs(close - close[1]) / close[1]
Which gives us a fairly simple filter :
filter = sum(c1 * close,periods) / sum(c1,periods)
At this point if we want to control the relative magnitude of the ROC coefficients we can do so by raising it to a power.
c2 = pow(c1, x)
Where x approaches zero the coefficient approaches 1 or a linear filter. At x = 1 we have an unmodified coefficient and higher values increase the relative magnitude of the response. As an extreme example with x = 10 we effectively isolate the highest ROC candle within the window (which has some novel support resistance horizontals as those closes are often important). This controls the degree of responsiveness, so we can magnify the responsiveness, but with the trade off of overshoot/persistence.
So now we have the problem that a highly weighted data point from a high volatility event persists within the filter window, possibly to an extreme degree: if a reversal occurs we get a potentially large "overshoot", which in a way actually induces a large amount of lag for future price action.
This filter compensates for this effect by exponentially decaying the abs(ROC) coefficient over time, so as a high volatility event passes through the filter window it receives exponentially less weighting allowing more recent prices to receive a higher relative weighting than they would have.
c3 = c2 * pow(1 - percent_decay, periods_back)
This is somewhat similar to an EMA; however, with an EMA being recursive, that event will persist forever (to some degree) in the calculation. Here we are using a fixed window, so once the event is behind the window it's completely removed from the calculation.
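Putting the pieces above together, a minimal Pine sketch of the weighting scheme might look like the following; the input defaults are placeholders and the published script (including its Super Smoother option mentioned below) may differ.

//@version=4
study("Decaying ROC non-linear filter (sketch)", overlay=true)
periods = input(50)
x = input(2.0, "ROC power")
percent_decay = input(0.1, "percent_decay", minval=0.0, maxval=1.0)
num = 0.0
den = 0.0
for i = 0 to periods - 1
    c1 = abs(close[i] - close[i + 1]) / close[i + 1]   // short absolute rate of change
    c2 = pow(c1, x)                                    // non-linear magnification of the coefficient
    c3 = c2 * pow(1 - percent_decay, i)                // exponential decay with the number of bars back
    num := num + c3 * close[i]
    den := den + c3
plot(den != 0 ? num / den : na)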
I've added Ehlers' Super Smoother as an optional smoothing function, as some highly non-linear settings benefit from smoothing. I can't remember where I got the original SS code snippet, so if you recognize it as yours msg me and I'll link you here.
First time coding - a 5min forex Scalping strategy
This is my first attempt at producing a strategy in Pine Script.
I am NOT a professional coder. I'm not even a good coder at that. I've only started Pine Script coding since September 2019. I am teaching myself.
This script is far from finished. I need to tweak a number of things about this script. Namely:
Add a validity window to the 'trigger bar' condition. I.e., I want to shut down the condition when the price closes above the EMA21
Change the order entry so they are stop orders, using the stop entry price derived from the signals
Make changes to lot sizing
Add a trailing stop condition
Comments welcome, but do not expect me to reply to any questions or requests. In fact, don't expect any replies from me. I consider myself notoriously bad at replies.
I do welcome any feedback from any seasoned coders out there, as I am still a novice coder, and have so much to learn!
As to anyone who wants to criticise me - constructive and helpful criticism are most welcome; criticism to make yourself feel superior to me - your kind can eat a dk.
For the strategy rules, google the user ForexSignals TV account and look for the video "SIMPLE and PROFITABLE Forex Scalping Strategy".
Share, learn, prosper
Peace to y'all
Serialhenry
6/11/19
Fast/Slow Degree Oscillator
Introduction
The estimation of a least squares moving average of any degree isn't an interesting goal in itself; this is due to the fact that lsmas of high degree would highly overshoot as well as overfit the closing price, and wouldn't really appear smooth. However, I proposed an estimate of an lsma of any degree using convolution and a new sine wave series; all the calculations are described in the paper: "Pierrefeu, Alex (2019): A New Low-Pass FIR Filter For Signal Processing."
Today I want to make use of this filter as an oscillator providing fast entry points. The oscillator is similar to the MACD in the sense that it consists of the difference between two filters, one faster than the other; however, unlike the MACD which uses two moving averages of different lengths, here I'll use two filters of the same length but different degrees.
The Indicator
The indicator consists of 3 elements: one main line (in green), one signal (trigger) line (in orange), and the histogram, which is the difference between the two. The main line is made from the difference between two filters, both of period length but with different degrees (fast, slow); fast should always be higher than slow. The signal line is just the exponential moving average of the main line; the period of the exponential moving average can be adjusted from the settings.
Both fast/slow determine the degrees of the filters; higher values will create a faster filter.
For those who are curious, the filter uses a kernel that estimates a polynomial function; this is how an lsma works, the kernel of an lsma of degree p being a polynomial of degree p. I achieved this estimation using a sine wave series.
When fast = 1 and slow = 0, the oscillator appears less periodic; this is equivalent to: lsma - sma
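That degenerate case can be written with built-ins alone, which gives a feel for the structure (main line, signal line and histogram); the sketch below covers only this special case, not the general degree-based filter, and the signal period default is a placeholder.

//@version=4
study("Degree oscillator, fast = 1 / slow = 0 case (sketch)")
length = input(100)
siglen = input(9, "Signal EMA period")
main = linreg(close, length, 0) - sma(close, length)   // lsma - sma
signal = ema(main, siglen)
plot(main, color=color.green)
plot(signal, color=color.orange)
plot(main - signal, style=plot.style_histogram)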
Using 2/1 allows the indicator to highlight cycles more easily without being uncorrelated with the price. This is equivalent to qlsma - lsma, where qlsma is a quadratic least squares moving average. This is similar to my old indicator "Linear Quadratic Convergence Divergence Oscillator".
By default the indicator uses 3 for fast and 2 for slow, but you can increase both values, here 4/3 :
In general higher values of fast/slow will create way more cyclical results, but they can be uncorrelated with the market price.
Conclusion
This indicator was made to show the filter calculation rather than to propose something interesting. However, it can be fun to see how the difference between low-lag filters creates more cyclical outputs; it often allows indicators to have more predictive capabilities.
I invite you to read the paper made about the filter, codes for both pinescript and python are provided.
Smart Envelope - Running Away From The Trend
Introduction
Envelope indicators consist of displaying one upper and one lower extremity on the price chart. They are most of the time built by adding/subtracting a volatility estimator (rolling stdev, atr, range... etc.) to/from a central tendency estimator (SMA, EMA, LSMA... etc.). Their interpretation is often subject to debate amongst technical analysts: some will use a support and resistance methodology, where price will start a downtrend once it crosses the upper extremity and an uptrend once it crosses the lower one. Others will prefer a breakout methodology, where price will reach higher highs once it crosses the upper extremity, and lower lows when it crosses the lower one. Because of price non-stationarity it's hard to select the best methodology: the support and resistance one will mostly work on ranging markets, while the breakout methodology mostly works on trending ones.
Therefore new methods were proposed: instead of using moving averages with a high amount of lag, faster filters were used, such as the least squares moving average or the zero lag exponential moving average; other band indicators were also created using adaptive filters, but improvements remain relatively small. The most difficult task would be to make extremities with the ability to return accurate support and resistance levels, and today I want to provide a new way to construct such extremities by using the recursive bands framework, which allows for extremely creative and efficient indicators.
The Main Idea
With classical band indicators, the upper and lower extremities will still be correlated with the main trend. The problem behind such a method is that we can't use a support and resistance methodology with trending markets; the fact that reversals exist tells us that our extremities will always be crossed by the main trend. Here is an example:
Here the support is correlated with the main trend; in order for it to be accurate we must assume the trend will go on forever and will only detect higher lows. This is what we expect with the orange line, but we can see that a severe downtrend totally destroys our plan.
In short we need to give some headroom to our extremities, and thus one extremity can't be correlated with the main trend.
The proposed Indicator
We want to minimize the correlation between the extremities, so if the upper extremity rises, the lower one must fall. This gives some headroom and allows the user to anticipate larger movements; this is how bands seeking to give support and resistance points should work.
The indicator has a length setting that controls the wideness of the extremities; unlike other indicators, low values such as 14 can still create really wide bands, so take that into account.
length = 5. Lower length values allow for more motion from the extremities, but do not necessarily involve detecting shorter-term support and resistance levels. The factor setting is not that important, but it returns extremities with more motion when high, and really wide bands when below 1 and greater than 0.
Central Tendency Estimator
Something fun with the recursive band framework is that the bands are no longer based on the central tendency estimator; instead, it's the central tendency estimator that is based on the bands. The central tendency estimator can also provide support and resistance points with the price, like classical moving averages, although its lack of motion is this time a downside.
Conclusion
Although the extremities are more accurate than those of other band indicators, the problem remains the same: a larger trend will always break the extremities and continue creating higher/lower highs/lows, at which point our stop loss would certainly be triggered. This is a huge downside of contrarian strategies; we sure might anticipate reversals earlier, but we are exposed to larger price movements, therefore the risk is extreme.
But the proposed methodology might still prove useful for developing more robust support and resistance levels based on envelope indicators.
Thanks for reading !
G-Channels - Efficient Calculation Of Upper/Lower Extremities
Introduction
Channel indicators are widely used in technical analysis; they provide a lot of information. In general, technical indicators giving upper/lower extremities are calculated by adding/subtracting a volatility component to/from a central tendency estimator. This is the case with Bollinger Bands, using the rolling standard deviation as volatility estimator and the simple moving average as central tendency estimator, or the Keltner channels using the exponential moving average and the average true range.
Lots and lots and lots (I can go on) of those indicators have been made; they only really need a central tendency estimator, which can be obtained from pretty much any filter. However, I find it interesting to focus on the efficiency of those indicators, therefore I propose a super efficient channel indicator using recursion. The average resulting from the upper/lower extremities of the indicator provides a new efficient filter similar to the average of the highest/lowest.
The Calculation - How Does It Work
Efficiency is often associated with recursion, which allows us to use past output values as inputs. So how is the indicator calculated? Let's look at the upper band calculation:
a := max(src, nz(a[1])) - nz(a[1] - b[1]) / length
src is the closing price, a is the upper extremity, b is the lower one. Here we only need 3 values: the previous values of a and b, and the closing price. Basically a := max(src, nz(a[1])) means:
if the closing price is greater than the precedent value of a then output the closing price, else output the precedent value of a
Therefore a will never be inferior to its precedent value, which is useful for getting the maximum price value in our dataset, however it's not useful for making an upper band. Therefore we subtract from it a correction factor defined as the difference between a and b; this forces the upper band to take lower values, thus acting like a band without losing its "upper" property. A similar process is done with the lower band.
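A minimal Pine sketch of the full channel, assuming the lower band simply mirrors the upper one as described, could look like this; the length default is a placeholder.

//@version=4
study("G-Channels (sketch)", overlay=true)
length = input(100)
src = close
var float a = na
var float b = na
a := max(src, nz(a[1], src)) - nz(a[1] - b[1]) / length
b := min(src, nz(b[1], src)) + nz(a[1] - b[1]) / length
avgline = avg(a, b)   // central tendency estimator discussed below
plot(a)
plot(b)
plot(avgline)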
Of course we could only use 2 values for making the indicator, thus ending with :
a := max(src, nz(a[1])) - nz(abs(close - a[1])) / length
In fact this implementation is the same as the one proposed in my paper "Recursive Bands - A New Indicator For Technical Analysis"; it's also what I used for making the indicator "Adaptive Trailing Stop". This would be more efficient, but I used the difference between the upper and lower extremities for a reason.
The Central Tendency Estimator
This is the reason why I didn't implement a more efficient version. Basically this central tendency estimator is just the average between the upper and lower extremities; it behaves like the average of the highest/lowest over length periods, i.e. the central plot in the Donchian channel indicator. Below is a comparison of both with length = 100:
But why is our average so "boxy"? The extremities are not boxy, so why is the average sometimes equal to its previous value? Explain!
It's super easy to understand: imagine two lines; if their absolute change is the same and they follow opposite directions, then their average is constant.
the average of the green and red line is the orange line. If both lines follow the same direction then their average will also follow this direction.
When both extremities follow the same direction, the average will also do the same; when both follow opposite directions, the average will be equal to its precedent value. This is also due to the fact that both extremities are based on the same correction factor (a - b), else the average wouldn't act that way. Now you understand why I made this choice.
Conclusion
I proposed an efficient implementation of a channel indicator that provides an interesting central tendency estimator. This simple implementation would allow for tons of interesting concepts; some of my indicators use a similar approach and allow for great outputs, you'll see them soon enough. I hope this indicator finds its use in the community; remember to ask before using this indicator in a script you want to publish.
Thanks for reading !
If you want to discuss anime stuff send me a pm, but don't do it in the comment section.
Blackman Filter - The Smoother The Better
Introduction
Who doesn't like smooth things? I'd like a smooth market price for Christmas! But I can't get it; instead it's so noisy... so you apply a filter to smooth it. Such filters are called low-pass filters: they smooth, and it's great, but they have lag, so nobody really uses them, but they are pretty to look at.
It's on a childish note that I will introduce this indicator, so what is it all about? I propose a new FIR filter using a Blackman function as filter kernel for financial time-series smoothing. Do you prefer the childish tone? Fear not, it's surprisingly easy!
The Blackman Function
The Blackman function looks like a bell-shaped curve, look:
The Blackman function will produce such a curve. This function is called a cosine-sum function because it is based on a sum of cosine functions, here only 2.
0.42 - 0.5 * cos(2 * pi * k) + 0.08 * cos(4 * pi * k)
Originally you use this function for windowing. What does that mean? In signal processing there is a function called the sinc function; if you use this function as filter kernel you would get the filter with the ideal frequency domain response, sometimes called a brickwall filter, and it would be extremely smooth.
Above the optimal low pass filter frequency response.
However the sinc function has no ending values and goes on forever, therefore we can't use it for convolution, except if we apply windowing. Filters using windowing are called windowed-sinc filters; I will describe the procedure below:
1 - Create a sinc function = sin(pi*n)/(pi*n)
2 - Truncate it = I only keep the first length points of the sinc function.
This creates an abrupt end; the frequency response of a filter using the truncated sinc as kernel would contain ripples in the passband and stopband, and this is bad! The frequency response would look like this:
3 - I multiply my values of step 2 by a window function, which can be the Blackman window; I no longer have an abrupt end, it's smooth!
The frequency response of the filter using this kernel would no longer have ripples! This is the power of windowing functions.
Here we are not using such a thing, but we could in the future. Here instead we use the Blackman function directly as filter kernel; because this function is bell shaped, the filter will certainly be smooth (symmetrical weighting is a rule of thumb for kernels when we want really smooth filters).
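As an illustration, a minimal Pine sketch of a Blackman-weighted moving average (convolving price with the Blackman function given above as kernel) might look like this; the length default is a placeholder and the published script may differ in details.

//@version=4
study("Blackman filter (sketch)", overlay=true)
length = input(50)
pi = 2 * asin(1)
out = 0.0
norm = 0.0
for i = 0 to length - 1
    // Blackman kernel: 0.42 - 0.5*cos(2*pi*k) + 0.08*cos(4*pi*k), k in [0, 1]
    k = i * 1.0 / (length - 1)
    w = 0.42 - 0.5 * cos(2 * pi * k) + 0.08 * cos(4 * pi * k)
    out := out + w * close[i]
    norm := norm + w
plot(out / norm)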
The Filter
This filter is quite smooth; unlike the gaussian filter, this filter gives less weight to recent and past values, which is because the Blackman function has fatter tails than the gaussian one. I could make a comparison of both, however they are quite alike; if you often use a gaussian filter it's up to you to decide which one you prefer.
The filter can do a better job than the moving average when it comes to preserve the frequency components that constitute the cycles/trend.
We can see that the filter has greater performance when it comes to keeping the shape of the market price, thus it has a slightly better fit.
Conclusion
Ok, so in this post you learned a bit about the sinc function and windowing; those are basic subjects in signal processing, and they allow us to approximate the filter with the ideal frequency response. I also showed you that those windowing functions could be used as kernels and that they were pretty smooth on their own. There are many others, but the one I prefer is the Blackman windowing function.
I know what you are thinking: "we want trailing stops, alerts, colors, arrows!", and I understand you pal, but sometimes it's cool to take a break from all this stuff. However, I can tell you that I'm working on a side project that aims to estimate rolling maximums/minimums as fast as possible; any experiments will be published here, and I can assure you that those indicators will make your day quite a bit brighter, we will see that soon.
I hope you learned something from this post! I'm a bit tired (look, I'm disappearing!)
Thanks for reading !
Running Equity - A New Indicator For Optimal Markets Detection
Introduction
Winning trades and gaining profits in trading is not impossible; however, having gross profits superior to gross losses is what makes trading challenging. It is logical to think that it is better to open a position when the probability of winning the trade is high. Such a probability can't be measured with accuracy, but a lot of metrics have been proposed in order to help determine when to open positions. Technical analysis supports the idea that a trending market is the best market condition for opening a position, which is logical when using a trend following strategy; therefore a long-term positively auto-correlated market is optimal for trading. This is why this paper presents a new method for detecting optimal market conditions in order to open a position.
The Indicator
The proposed indicator is based on the assumption that positive returns from a trend following strategy are a strong indication of trend strength. The indicator is built from the conditions of a simple SMA cross trend following strategy, which are to go long when price > SMA and to go short when price < SMA. Then the equity from those conditions is built; in order to provide a more flexible indicator, length controls the period of the sum.
When the indicator is positive it means that the market allows for potential returns, and it can thus be considered trending. Conversely, a negative value of the indicator indicates a ranging market that won't allow for returns.
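A minimal Pine sketch of that construction might look like the following, assuming the equity is the running sum of the one-bar returns produced by the SMA cross conditions; the input names and defaults are placeholders.

//@version=4
study("Running equity (sketch)")
smaLength = input(50, "SMA length for the trend following conditions")
length = input(100, "Summation period")
// position taken on the previous bar: long when price > SMA, short when price < SMA
pos = close[1] > sma(close[1], smaLength) ? 1 : -1
ret = pos * (close - close[1])      // one-bar return of the strategy
req = sum(ret, length)              // running equity over the summation period
plot(req, style=plot.style_columns)
hline(0)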
Filtering Bad Trades
The indicator can be used to filter bad trade entries. In this example a Bollinger band breakout strategy is used; without any changes the strategy returns the following equity on EURUSD:
The proposed indicator is then applied with the following conditions : buy and sell only if Req > 0
With an indicator period = 100 we filtered unprofitable trades.
Conclusion
I presented a new indicator for the detection of optimal markets based on a running equity. I hope it may find applications in technical analysis and help investors get pertinent outputs from it.
It would mean a lot if you could read the original paper: figshare.com
Forecast Oscillator (ps4)
This is a scaled version of a Forecast Oscillator, which may be used as a standalone indicator or as a filter. Scaling allows reducing data to a standard interval, say, 0..1 or -1..1. Oftentimes, it also makes data more contrastive.
Smart Labelling - Range Filter
This is a labelling module based on a range filter. Notice that the trick here is to use Fibonacci numbers. Use a smaller range multiplier for higher TFs. This module may serve as a signal generator to be passed through a signal filter.
Quote from the original author:
This is an experimental study designed to filter out minor price action for a clearer view of trends. Inspired by the QQE's volatility filter, this filter applies the process directly to price rather than to a smoothed RSI. First, a smooth average price range is calculated for the basis of the filter and multiplied by a specified amount. Next, the filter is calculated by gating price movements that do not exceed the specified range. Lastly the target ranges are plotted to display the prices that will trigger filter movement.
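A minimal Pine sketch of the gating process described in that quote could look like the following; the smoothing length, the multiplier and the default values are assumptions, not the original script's settings.

//@version=4
study("Range filter gating (sketch)", overlay=true)
length = input(100, "Range smoothing period")
mult = input(3.0, "Range multiplier")
rng = ema(abs(close - close[1]), length) * mult   // smooth average price range times the multiplier
var float filt = na
prev = nz(filt[1], close)
// gate price movements that do not exceed the range
filt := close > prev + rng ? close - rng : close < prev - rng ? close + rng : prev
plot(filt)
plot(prev + rng, color=color.gray)   // target ranges that trigger filter movement
plot(prev - rng, color=color.gray)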
Hybrid Convolution Filter
Introduction
Today I propose a hybrid filter that uses a classical FIR architecture while also using recursion. The proposed method aims to reduce the lag generated by FIR filters. This particular filter is a sine weighted moving average, but you can change it since the indicator is built with the custom filter template (1). Even if it uses recursion, it is still a FIR filter since the impulse response is finite.
The Indicator
In red the hybrid swma and in blue the classic swma, both with the same period. The difference can be seen.
The switch between the input price and the past values of the previous convolution is made by using exponential averaging; the window function is the same as f(x) in the code.
Any filter can use this architecture; the indicator is built around the custom FIR template, see (1).
Conclusion
I presented a FIR filter using recursion in its calculation. The integration is made with respect to the proposed template, therefore any user can simply modify f(x) to get a different filter without the need to make any other change. However, curious users might want to change the window function of the exponential averager; in order to do so, change sgn = f(i/length) in line 11 to sgn = fun(i/length), where fun is your custom function. Make sure to add it at the start of the script where all the other function declarations are.
Thanks for reading !
(1)
Template For Custom FIR Filters - Make Your Moving Average
Introduction
FIR filters (finite impulse response) are widely used in technical analysis: there is the simple or arithmetic moving average, the triangular, the weighted, the least squares... etc. A FIR filter is characterized by the fact that its impulse response (the output of the filter using an impulse as input) is finite; this means that the impulse response won't have infinite outputs, unlike IIR filters.
They are also extremely simple to design, even without the Fourier transform, and this is why I post this template that will let you create custom filters from step responses. Don't hesitate to post your results.
How It Works
Originally you create your filters from the frequency response you want your filter to have; this is because the inverse Fourier transform of the frequency response is the filter's impulse response.
After that step you use convolution (convolution is the sum of the products between the signal and the impulse response) and you will have your filter. But we don't have Fourier transforms in Pine, so how can we possibly make FIR filters from convolution? Well, here's the thing: the impulse response is the derivative of the step response, and the step response is the sum of the impulse response. This means we can create filters from step responses.
Step response of a moving average.
Step responses are easy to design: you just need a function that starts at 0 and ends up at 1.
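In other words, the template's mechanics can be sketched as follows: take the discrete derivative of the step response to get the kernel, then convolve the price with it. This is only an illustration of the idea, not the template's actual code; f(x) here is the sqrt example used further below.

//@version=4
study("Custom FIR from a step response (sketch)", overlay=true)
length = input(50)
f(x) => sqrt(x)   // user-defined step response, going from 0 at x = 0 to 1 at x = 1
out = 0.0
for i = 0 to length - 1
    // impulse response = discrete derivative of the step response
    w = f((i + 1.0) / length) - f(i * 1.0 / length)
    out := out + w * close[i]   // convolution; the weights sum to f(1) - f(0) = 1
plot(out)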
How To Use The Template
All the work is done for you, the only thing you need to do is to enter your function at line 5 :
f(x)=> your function
For example if you want your filter to have a step response equal to sqrt(x) just enter :
f(x)=> sqrt(x)
This will give the following filter output :
You can create custom step responses with online graphing tools like fooplot or Wolfram Alpha; I recommend fooplot.
You can also design your filter step response from lines 14/15/16: b will be your filter step response, just use a, for example b = pow(a,2), then replace output in plot by b and use overlay false. You can also plot step. If you like your step response, copy the content of b and paste it after f(x) =>.
Filter Characteristics
The impulse response determines how much of a certain signal you want in your filter; this is also called weighting. You can think of filter design as cooking, where your ingredients are the signal at different periods and the impulse response determines how much of an ingredient you must include in the recipe. The step response can also tell you about your filter characteristics, for example:
This one converges faster to the step function; this means that the filter will have less lag.
However this one converges slower to the step function; this means the filter might have more lag but could be smoother.
Be aware that you must find a good weighting balance, else you can have an output equal to the signal, or just a delayed version of the signal without smoothing.
Real Case
Let's design a sine weighted moving average (swma); this FIR filter uses the first 180 degrees of a sine wave function as impulse response.
Impulse response of the swma.
We can design it from the step response without much trouble. Remember that the impulse response is the derivative of the step response; therefore the derivative of the step response must be equal to the first 180 degrees of a sine wave, and since the derivative of the cosine function is a sine function:
f(x)=> .5*(1 - cos(x*pi))
And voila.
Designing A BandPass Filter
A bandpass filter is like a low-pass and high-pass filter combined; you can think of it as a smooth oscillator.
To design a bandpass filter your step response must be bell shaped, i.e. starting at 0 and ending at 0, for example:
f(x)=>sin(x*pi) gives:
Conclusion
Just use fooplot and experiment; you could get nice filters. I will try to post some using this template, but it would be really nice to have other people use it. If you need further help, pm me.
Thanks for reading !
Fisher Least Squares Moving Average
Introduction
I have already estimated the least-squares moving average numerous times; one of the most elegant ways was by rescaling a linear function to the price by using the z-score. Today I propose a new smoother (FLSMA) based on the line rescaling approach and the inverse Fisher transform of a scaled moving average error, with the goal of providing an alternative least-squares smoother. The indicator won't use the correlation coefficient and will try to address problems such as overshoots and lag reduction.
Line Rescaling Method
For those who did not see my least squares moving average estimation using the line rescaling method, here is a summary: we want to fit a polynomial function of degree 1 to the price by reducing the sum of squares between the price and the filter, where "squares" means the squared differences between the price and its estimation. The line rescaling technique works as follows:
1 - get the z-score of a line.
2 - multiply this z-score with the correlation between the price and a line.
3 - multiply the precedent result by the standard deviation of the price, then add that to a simple moving average.
This process is shorter than the classical least-squares moving average method.
Z-Score Derivation And The Inverse Fisher Transform
The FLSMA uses a similar approach to the line rescaling technique, but instead of using the correlation during step 2 we will use an alternative calculated from the error between the estimate and the price.
In order to do so we must use the inverse Fisher transform. The inverse Fisher transform can take a z-score and scale it to a range of (1, -1), and it is possible to estimate the correlation with it. First let's create our modified z-score in the form: Z = ma((y - Y)/e), where y is the price, Y our output estimate and e the moving average absolute error between the price and Y; let's call it the scaled smoothed error. Then apply the inverse Fisher transform: r = IFT(Z) = tanh(Z). We then multiply the z-score of the line by it.
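A minimal Pine sketch following that description might look like the following; the tanh helper, the na handling and the length default are my own, and the published FLSMA may differ in details (for instance the p factor discussed below).

//@version=4
study("FLSMA (sketch)", overlay=true)
length = input(100)
src = close
ift(x) => (exp(2 * x) - 1) / (exp(2 * x) + 1)   // inverse Fisher transform (tanh)
var float b = na
e = sma(abs(src - nz(b[1], src)), length)                // moving average absolute error
z = e != 0 ? sma(src - nz(b[1], src), length) / e : 0    // scaled smoothed error
r = ift(z)                                               // correlation replacement in (-1, 1)
lineZ = (bar_index - sma(bar_index, length)) / stdev(bar_index, length)   // z-score of a line
b := sma(src, length) + lineZ * r * stdev(src, length)
plot(b)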
Performance
The FLSMA greatly reduces the overshoots; this means that the maxima of abs(r) are lower than the maxima of the absolute correlation. Such a case is not "bad", but we can see that the filter is not closer to the price than the LSMA during trending periods, so we can assume the filter doesn't reduce the sum of squares as well as the LSMA.
The image above is the running mean of the absolute error of each of the FLSMA (in red) and the LSMA (in blue). We could fix this problem by multiplying the smooth scaled error by p, where p can be any number, for example:
z = sma(src - nz(b[1], src), length)/e * p where p = 2
In red the FLSMA and in blue the FLSMA with p = 2; the greater p is, the less lag the FLSMA will have.
Conclusion
It could be possible to get better results than the LSMA with such a design. The presented indicator uses its own correlation replacement, but it is possible to use anything in a range of (1, -1) to multiply the line z-score. Although the proposed filter only reduces overshoots without keeping the accuracy of the LSMA, I believe the code can be useful for others.
Thanks for reading.
SVAMA - A Non Parametric Adaptive Moving Average Based On Volume
Introduction
Technical indicators often have parameter settings that the user must enter; those are inconvenient when the user must design a strategy, because such settings must be optimized. It must also be noted that the optimal settings at time t could change at time t+n, which is why non-parametric indicators are more efficient. Today I propose a moving average adapting to the market volume without using parameters affecting the smoothing.
The Indicator
The volume is rescaled in a range of (1,0) by using max or min normalization. Exponential averaging is used to provide the moving average.
When using max normalization the moving average reacts faster when the volume is closer to its all-time high; when using min normalization the moving average reacts faster when the volume is closer to its all-time low. You can select the method (max or min) from the "Method" parameter.
Volume tends to be higher and more periodic on higher time-frames, which is why lower time-frames might return smoother results when using the Max method. It is recommended to use the Max method when we want a faster moving average, while the Min method is more suited to get a slower moving average.
Both methods can provide an interesting MA-Cross system when used on higher time frames.
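A minimal Pine sketch of that logic, assuming max normalization means the volume divided by its all-time high and min normalization means the all-time low divided by the volume, is shown below; the option naming is a placeholder.

//@version=4
study("SVAMA (sketch)", overlay=true)
useMax = input(true, "Max normalization (unchecked = min normalization)")
src = close
var float hi = na
var float lo = na
hi := max(nz(hi, volume), volume)            // all-time volume high
lo := min(nz(lo, volume), volume)            // all-time volume low
a = useMax ? volume / hi : lo / volume       // volume rescaled to a (0, 1] range
var float out = na
out := nz(out, src) + a * (src - nz(out, src))   // exponential averaging with a as smoothing factor
plot(out)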
Conclusion
There should be more non-parametric indicators; this would allow for faster and easier optimization processes when creating a strategy. In theory any indicator using a moving average or highest/lowest could be made non-parametric by using a running mean or a running max/min, but the indicator might lose important information.
This is one of my main focuses right now, since such indicators could also allow for improvements when used with artificial intelligence. I hope you find a use for it; don't hesitate to send me your suggestions.
Thanks for reading !
Adaptive Autonomous Recursive Trailing Stop
Introduction
Trailing stops are important indicators in technical analysis. Today I propose a new trailing stop, A2RTS, based on my last published indicator A2RMA (1); that indicator directly used an error measurement, thus providing a way to create envelopes, which in turn provide a direct way to create trailing stops based on highest/lowest rules.
The Indicator
If you need a more detailed explanation of this indicator I encourage you to check the A2RMA indicator post I made. The parameters do not differ from the supertrend, thus having a length parameter and a factor parameter, here described as gamma; gamma controls how far away the bands are from each other, thus spotting longer-term trends when gamma is higher.
On BTCUSD
Something worth mentioning is that the indicator sometimes behaves like my MTA trailing stop indicator (2), which is closer to the price when a trend persists, thus providing early exit points; however, A2RTS behaves a bit better.
Price can sometimes break the trailing stop; this can be interpreted as support/resistance or just as an exit point, although the support/resistance methodology on trailing stops is not the most recommended.
Sometimes it is recommended to have a higher length rather than a high gamma, like in this case for INTEL CORP, below with gamma = 3 and length = 20.
The microprocessor market likes higher lengths instead of higher gammas; A2RMA is a non-linear filter, which would explain such behaviour.
Conclusion
Trailing stops might not suffer as much from whipsaw trades as MA crossovers, but they still remain inefficient when the market is not trending; results of the proposed indicator on major forex pairs are more than disappointing, but I hope this will serve as a basis for other trailing stops that might act a little bit better. I conclude this post by thanking everyone who supports my work, and I encourage you to modify this indicator and share it with the community.
Thanks for reading !
Cited Articles
Adaptive Autonomous Recursive Moving Average
Introduction
Using conditions in filters is a way to make them adapt to those conditions. I already used this methodology in one of my proposed indicators, ARMA, which gave a really promising adaptive filter: ARMA tried to have a flat response when dealing with ranging markets while following the price when the market was trending or exhibiting volatile movements. The filter was terribly simple, which is one of its plus points, but its down points were clearly affecting its performance, thus making it almost impractical.
Today I propose a new filter, A2ARMA, which aims to correct all the bad behaviours of ARMA while having good performance on various markets thanks to the added adaptivity.
Fixes And Changes
ARMA was dealing with terrible over/under-shoots which affected its performance, and adding a zero-lag option made things even worse. In order to fix those mistakes I first cleaned the code, then I removed the offset for src in d; this choice is optional but the filter is sometimes more accurate this way.
The major change is the use of an adaptive moving average instead of the triangular moving average that smoothed the output. This adaptive moving average is calculated using exponential averaging with the efficiency ratio as the smoothing variable; this choice surprisingly removed the majority of overshoots while adding more adaptivity to the filter.
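For reference, that smoothing step on its own might be sketched in Pine as follows (a Kaufman-style efficiency ratio driving an exponential average); this is only the adaptive-smoothing piece, not the full A2ARMA with its gamma-based flat-market handling, and the length default is a placeholder.

//@version=4
study("Efficiency ratio adaptive smoothing (sketch)", overlay=true)
length = input(14)
// efficiency ratio: net change over the window divided by the sum of absolute bar-to-bar changes
volsum = sum(abs(change(close)), length)
er = volsum != 0 ? abs(change(close, length)) / volsum : 0
var float ama = na
ama := nz(ama, close) + er * (close - nz(ama, close))   // exponential averaging with er as smoothing factor
plot(ama)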
The Indicator
The indicator works the same way as ARMA: not reacting during flat market periods while following the price when it is volatile or trending. length controls the smoothing amount, while gamma determines how the filter is affected during flat market periods; gamma = 0 is just a double smoothed adaptive moving average, and higher values of gamma will filter flat markets to a certain degree.
On Intel Corp with gamma = 0, I want to filter the flat period starting at July 10; gamma = 3 will certainly help us on this task.
Hooray, the problem appears to be solved! Lower values of gamma also produce a desirable effect, as shown below:
gamma = 2
So far so good, but gamma or length might have different optimal values depending on the market, and problems still exist, as shown here:
Seagate is tricky, gamma at 2.4 might help
The relationship between length and gamma is somewhat complicated.
On Different Markets
While some filters will process the market price the same way no matter the market they are applied to, A2ARMA will change drastically depending on the market.
On AMD
On EURUSD
On BTCUSD
Comparison With ARMA
ARMA with parameters roughly matching A2RMA; overall, most of the problems I wanted to fix were indeed fixed.
Conclusion
A huge thanks for the support I received during this "Blank Page" period I'm suffering from. ARMA was an indicator I really wanted to develop further without giving up on the code simplicity, and I think this version might provide useful results. We can also notice that decision making is easier with this version of the indicator thanks to the added coloring (which would have been impossible with ARMA).
My work doesn't have a license attached to it; feel free to modify it and share your findings, mentioning is appreciated :)
Thanks for reading !
R2-Adaptive Regression
Introduction
I already mentioned various problems associated with the lsma, one of them being overshoots, so here I propose an lsma using a developed and adaptive form of 1st order polynomial to provide several improvements to the lsma. This indicator will adapt to various coefficients of determination while also using recursion.
More In Depth
A 1st order polynomial is of the form: y = ax + b. Our indicator however will use: y = a*x + a1*x1 + (1 - (a + a1))*y[1], where a is the coefficient of determination of a simple lsma and a1 the coefficient of determination of an lsma that tries to best fit y to the price.
In some cases the coefficient of determination, or r-squared, is simply the squared correlation between the input and the lsma. The r-squared can tell you if something is trending or not, because it's the correlation between the rough price containing noise and an estimate of the trend (lsma). Therefore the filter gives more weight to x or x1 based on their respective r-squared; when both r-squared values are low, the filter gives more weight to its precedent output value.
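A minimal Pine sketch of that weighting might look like the following; here x1 is assumed to be an lsma computed on the filter's previous output (my reading of "fit y to the price"), the r-squared values are squared correlations as described above, and no guard is added for the case a + a1 > 1.

//@version=4
study("R2-Adaptive Regression (sketch)", overlay=true)
length = input(100)
var float y = na
x = linreg(close, length, 0)                   // lsma of the price
x1 = linreg(nz(y[1], close), length, 0)        // lsma on the previous output (assumption)
a = pow(correlation(close, x, length), 2)      // r-squared between price and x
a1 = pow(correlation(close, x1, length), 2)    // r-squared between price and x1
y := a * x + a1 * x1 + (1 - (a + a1)) * nz(y[1], close)
plot(y)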
Comparison
lsma and R2 with both length = 100
The result of the R2 is rougher, faster, has less overshoot than the lsma and also adapts to market conditions.
Longer/shorter term periods can increase the error compared to the lsma because of the R2 trying to adapt to the r-squared. The R2 can also provide good fits when there is an edge; this is due to the part where the lsma fits the filter output to the input (y2).
Conclusion
I presented a new kind of lsma that adapts itself to various coefficients of determination. The indicator can reduce the sum of squares because of its ability to reduce overshoot as well as remain stationary when the price is not trending. It can be interesting to apply exponential averaging with various smoothing constants as long as you use (1 - (alpha + alpha1)) at the end.
Thanks for reading
Ratio OCHL Averager - An Alternative to VWAP
Introduction
I had the idea to make this indicator thanks to @dpanday, with the support of @Coppermine and @Reika. Vwap is a non-parametric indicator based on volume that is used by lots of traders and institutions; its non-parametric particularity makes it great because it doesn't need to go through parameter optimization. Today I present a similar indicator called Ratio OCHL Averager, based on exponential averaging using the ratio of the open-close range to the high-low range, with monthly high/low.
The Indicator
The indicator can be made more recursive by checking the "recursive" option; this allows using the indicator output instead of the open price for the calculation of the ratio of the open-close range to the high-low range. The result is a more reactive estimation.
The indicator's reactivity changes based on the time frame you are in: using a higher time frame results in a more reactive indicator. However, it is way less reactive than the vwap; this is a personal choice, since I wanted this indicator to be smooth even on high time frames. If you want to change that, use another resolution for H and L in lines 5 and 6.
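A minimal Pine sketch of that idea might look like the following; the monthly resolution for H and L matches the description, while the exact ratio definition, the clamping and the recursive handling are assumptions.

//@version=4
study("Ratio OCHL Averager (sketch)", overlay=true)
recursive = input(false, "Recursive")
H = security(syminfo.tickerid, "M", high)   // monthly high
L = security(syminfo.tickerid, "M", low)    // monthly low
var float out = na
src = recursive ? nz(out, open) : open                    // the recursive option replaces the open with the previous output
ratio = H != L ? min(abs(close - src) / (H - L), 1) : 0   // ratio of open-close to high-low range
out := nz(out, close) + ratio * (close - nz(out, close))  // exponential averaging with ratio as smoothing factor
plot(out)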
Conclusion
I presented an alternative to vwap based on the Ratio OCHL indicator. I hope you like it and thanks for reading !
Thanks to Coppermine and Reika for the support during the creation of the indicator