Zorro Trader: How Accurate is DeepNet?

The posts I wrote about FOREX neural network trading with Zorro Trader and DeepNet made me ask myself a question: how accurate is DeepNet in its predictions? Sure, 56.1% of our trades are winning, but what if DeepNet was 90% accurate and Zorro Trader screwed up the execution? Or what if DeepNet was only 30% accurate, but Zorro Trader got lucky and always executed the trades with favorable slippage (which is impossible in real life)? Or what if, even worse, DeepNet is inaccurate, Zorro Trader is unlucky, and this is a perfectly good strategy that I was just about to discard? Or what if the Gremlins… well, you get my point. We need to know for sure what the TRUE accuracy of the DeepNet neural network is, which I suspect will be a different number than 56.1%.

At this point, some may ask why I am willing to split hairs, when a better use of our time would be to improve the strategy by changing the neural network architecture and parameters, or maybe feeding it different data, such as technical indicators. The answer is: if the simplest prototype does not work on the simplest data in the simplest realistic scenario, like… does not work at all… then we are probably on a fool's errand. And the simplest question here is: is DeepNet good at predicting the market and Zorro bad at executing, or the other way around? Or something else? We shall see…

So how do we go about testing DeepNet by itself on prediction accuracy? Well, first we un-comment the DO_SIGNALS line in the strategy script and comment out the DEEPNET line right below it. Make it look like this, but don't touch anything else:

// Deep Learning Test ///////////////////////////////////////////

#define DO_SIGNALS  // generate sample set in Train mode
//#define DEEPNET
//#define H2O 
//#define MXNET
//#define KERAS

///////////////////////////////////////////////////////////////////////

By doing this, you tell Zorro Trader to generate a data set for DeepNet, which we can use to see what's going on under the hood, directly in R. Zorro Trader does the same thing during the Walk Forward Optimization test; you just don't see it, as it happens behind the scenes. Clicking the Train button of Zorro Trader will generate a file like DeepLearn_something_L.csv (where something depends on the asset you trade). Copy that file to your R working directory, or otherwise make it available to RStudio (if that's what you're using). Then run the file DeepLearn.R in R or RStudio (not with Zorro). This is what I got (trimmed to the part we care about):

               Accuracy : 0.6265         
                 95% CI : (0.6015, 0.651)
    No Information Rate : 0.5007         
    P-Value [Acc > NIR] : <2e-16         
                                         
                  Kappa : 0.253          
                                         
 Mcnemar's Test P-Value : 0.6432         
                                         
            Sensitivity : 0.6190         
            Specificity : 0.6340         
         Pos Pred Value : 0.6290         
         Neg Pred Value : 0.6240         
             Prevalence : 0.5007         
         Detection Rate : 0.3099         
   Detection Prevalence : 0.4927         
      Balanced Accuracy : 0.6265
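
For the record, the statistics above are printed by caret's confusionMatrix() function. If you want to reproduce the measurement by hand rather than through DeepLearn.R, here is a minimal sketch of how that could look. It assumes the layout I see in the DeepLearn.R that ships with Zorro: the CSV has no header, every column except the last is an input signal, and the last column holds the trade result (positive = winning trade). The file name and the hidden layer size are placeholders, so adjust them to your setup:

library(deepnet)  # nn.train() / nn.predict()
library(caret)    # confusionMatrix()

# Hypothetical file name; substitute the CSV Zorro generated for your asset.
XY <- read.csv("DeepLearn_EURUSD_L.csv", header = FALSE)

# Assumption: all columns but the last are input signals,
# and the last column is the trade result (positive = winning trade).
X <- as.matrix(XY[, -ncol(XY)])
Y <- ifelse(XY[, ncol(XY)] > 0, 1, 0)

# Chronological 80/20 split: train on the older rows, test on the newer ones.
split <- floor(0.8 * nrow(XY))
net   <- nn.train(X[1:split, ], Y[1:split],
                  hidden = c(30), numepochs = 100)  # illustrative network size

# Classify the held-out rows and compare predictions with what really happened.
pred <- ifelse(nn.predict(net, X[-(1:split), ]) > 0.5, 1, 0)
confusionMatrix(as.factor(pred), as.factor(Y[-(1:split)]))

The exact numbers will differ from the output above (the split and the network parameters are not the same as in the Walk Forward Optimization), but the point is the same: you get a direct measurement of the classifier's accuracy, with no trade execution involved.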

It is pretty clear that the accuracy of DeepNet, at 62.65%, is actually about 6.5 percentage points better than the 56.1% of winning trades we're getting (the quick arithmetic below spells it out). So what's going on here? And does it matter? Well, I am not sure yet what's going on, but it clearly matters, as a gap like that is no joke! It may or may not make this strategy profitable, but in general we always need to know what's going on, especially when it's about money…
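
To put exact numbers on that gap, a couple of lines you can paste into the R console:

acc  <- 0.6265   # DeepNet's out-of-sample accuracy, from the output above
wins <- 0.561    # fraction of winning trades in the Zorro back-test
acc - wins       # 0.0655: about 6.5 percentage points
acc / wins - 1   # 0.1168: almost 12% in relative terms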

I visually examined some of the trades Zorro Trader made, and looked at the log files as well. Clearly, Zorro Trader is trying to run a very realistic simulation in the back-test, with variable and random slippage, trade entry times, and so on. My guess at this point is that we're not getting 62% winning trades because of… bad luck in execution. But this is exactly what is expected to happen in real life, so it is NOT a Zorro error or bug. Actually, it is just the sign of a very realistic back-test, which is to be appreciated.
