Forex Data, Simple System


Peter Henry

Nov 15, 2015, 8:29:07 AM11/15/15
to convnetjs
Hello 

I would like to start a small project to evaluate whether Deep Q learning can be helpful in system development for the Forex markets. The goal of the system is the highest pips/points over time.


The system has one of three options per traded time period (bar):

a) Buy at the opening price and exit at the close
b) Sell at the opening price and exit at the close
c) Do nothing

Only one trade is allowed per bar.
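
For illustration, a minimal sketch of the per-bar reward for each option (assuming a bar object with numeric open and close prices, in points):

   // Minimal sketch (illustrative only): reward in points for one bar,
   // assuming the trade is entered at the open and exited at the close.
   function barReward(action, bar) {
     if (action === 'BUY')  return bar.close - bar.open; // long from open to close
     if (action === 'SELL') return bar.open - bar.close; // short from open to close
     return 0;                                           // do nothing
   }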


Can you provide guidance or assistance in doing this? I can provide over 20 years of Forex data for testing. The idea is to see if a Deep Q network would be helpful in trading system development.

Thank you 

Peter Henry 


GBPUSD1440.csv

Dan Bikle

Nov 16, 2015, 4:09:57 AM11/16/15
to convnetjs
Peter,

I may be able to help you with this.

I have built a system close to what you describe.

The ML is done with sklearn, which is implemented in Python.

Take a look and send me e-mail with any questions you may have.

convnetjs is JavaScript technology, which might be a better fit for predicting Forex, but that is open for debate.

Visual clues to my system can be found at this URL:

www.forex611.com

Dan Bikle
Lead Developer, Forex611.com

Dan Bikle

Nov 16, 2015, 3:32:22 PM11/16/15
to convnetjs
Group,

I like Peter's train of thought.

My current system implements the idea of constantly learning.

It looks back over the last x observations and learns from them.

It ignores any observation older than x.

Also my system implements the idea of:
  - calculate prediction
  - open position
  - wait set amount of time
  - close position

One variation Peter offers is the idea of opening a position of 0 contracts, which is the same as doing nothing.
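
A minimal sketch of that loop (illustrative only; the model object is a hypothetical stand-in for whatever learner is used):

   // Minimal sketch (illustrative only): a constantly-learning loop that keeps
   // the last x observations and trades one bar at a time.
   var x = 500;        // look-back window; observations older than x are ignored
   var history = [];   // the most recent x observations

   var model = {       // hypothetical stand-in for the actual learner
     predict: function (obs) { return 0; },  // -1 = sell, 0 = do nothing, +1 = buy
     learn:   function (obs) { /* refit on the rolling window */ }
   };

   function onNewBar(bar) {
     history.push(bar);
     if (history.length > x) history.shift();  // forget anything older than x

     var signal = model.predict(history);      // calculate prediction
     if (signal !== 0) {
       // open position, wait a set amount of time (one bar), then close position
     }
     model.learn(history);                     // learn from the last x observations
   }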

In addition to ML technology, I do web development.

Group, I am curious about:
What do you see as a user interface for the system Peter imagines?

Dan Bikle
Lead Developer
Forex611.com

Peter Henry

Nov 17, 2015, 2:31:17 PM11/17/15
to Dan Bikle, convnetjs
Hello Group

Please see a sample of the Forex plan.

The reward: maximum pips/points over time.

Options:

1) Buy (long) the open, exit at the close
2) Sell (short) the open, exit at the close
3) Do nothing

Note: only one position per price bar.
The evaluation of the basic system is as follows:

1) Percentage accuracy
2) Average profit per trade
3) Total pips gained over time
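
For illustration, a minimal sketch of these three evaluation measures (assuming an array of per-trade rewards in points; pips here are points * 10000 for a 4-decimal GBPUSD quote):

   // Minimal sketch (illustrative only): the three evaluation measures,
   // given an array of per-trade rewards in points.
   function evaluate(rewards) {
     var total = 0, wins = 0;
     for (var i = 0; i < rewards.length; i++) {
       total += rewards[i];
       if (rewards[i] > 0) wins += 1;
     }
     return {
       percentageAccuracy:    (wins / rewards.length) * 100,
       averageProfitPerTrade: total / rewards.length,
       totalPipsGained:       total * 10000   // 1 pip = 0.0001 for GBPUSD
     };
   }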



Thank you

Regards

Peter




Design Forex Java.pdf

Peter Henry

Nov 20, 2015, 2:16:49 AM11/20/15
to convnetjs

Group


I'm not a programmer; however, I'm learning the concepts slowly. I have attached a sample of what I think the code should look like. I have highlighted the variations in yellow in the attachment (marked as comments in the code below); any further assistance would be appreciated.

Sample daily GBPUSD data (Date, Open, High, Low, Close, Volume):

Date        Time   Open    High    Low     Close   Volume
1993.05.12  00:00  1.537   1.5445  1.529   1.5338  2781
1993.05.13  00:00  1.5328  1.536   1.518   1.5225  2571
1993.05.14  00:00  1.5228  1.5415  1.52    1.5387  2711
1993.05.17  00:00  1.5365  1.546   1.5309  1.5355  2921
1993.05.18  00:00  1.535   1.538   1.5237  1.5365  2711
1993.05.19  00:00  1.535   1.5482  1.5328  1.5432  2261
1993.05.20  00:00  1.5425  1.5603  1.5383  1.5565  3001
1993.05.21  00:00  1.5548  1.5592  1.539   1.5425  2811
1993.05.24  00:00  1.54    1.545   1.5289  1.5365  2871
1993.05.25  00:00  1.5385  1.547   1.5345  1.542   2151
1993.05.26  00:00  1.542   1.5505  1.541   1.5472  1381
1993.05.27  00:00  1.5453  1.565   1.5425  1.562   2831
1993.05.28  00:00  1.561   1.568   1.5535  1.5607  2871
1993.05.31  00:00  1.5605  1.563   1.556   1.561   1351
1993.06.01  00:00  1.5613  1.567   1.546   1.5555  3621
1993.06.02  00:00  1.5544  1.556   1.5364  1.54    2481
1993.06.03  00:00  1.5403  1.5501  1.537   1.5501  1641
1993.06.04  00:00  1.549   1.5505  1.506   1.5085  4851
1993.06.07  00:00  1.5063  1.5268  1.506   1.5268  2111
1993.06.08  00:00  1.5253  1.5265  1.5135  1.5205  2121
1993.06.09  00:00  1.5195  1.527   1.5069  1.5152  3591
1993.06.10  00:00  1.5137  1.5325  1.507   1.5305  3421
1993.06.11  00:00  1.5293  1.5405  1.5185  1.5215  3621
1993.06.14  00:00  1.519   1.533   1.519   1.5285  1851
1993.06.15  00:00  1.5275  1.5355  1.514   1.5165  3201


Using code from the ConvNetJS RL demo as a reference:

https://cs.stanford.edu/people/karpathy/convnetjs/demo/rldemo.html

I have marked the possible variations we could use for the project (highlighted in yellow in the attachment, shown as comments below). This is just for illustration; the original code is untouched and used as a reference.

var num_inputs = 27; // original demo: 9 eyes, each sees 3 numbers (wall, green, red thing proximity)
var num_inputs = 6;  // proposed variation: Date, Open, High, Low, Close, Volume

var num_actions = 5; // original demo: 5 possible angles agent can turn
var num_actions = 3; // proposed variation: (Buy open, exit close), (Sell open, exit close), (Do nothing)

var temporal_window = 1; // amount of temporal memory. 0 = agent lives in-the-moment :)
                         // not sure yet how this would integrate with our system

var network_size = num_inputs*temporal_window + num_actions*temporal_window + num_inputs;


// the value function network computes a value of taking any of the possible actions
// given an input state. Here we specify one explicitly the hard way
// but user could also equivalently instead use opt.hidden_layer_sizes = [20,20]
// to just insert simple relu hidden layers.
var layer_defs = [];
layer_defs.push({type:'input', out_sx:1, out_sy:1, out_depth:network_size});
layer_defs.push({type:'fc', num_neurons: 50, activation:'relu'});
layer_defs.push({type:'fc', num_neurons: 50, activation:'relu'});
layer_defs.push({type:'regression', num_neurons:num_actions});

// options for the Temporal Difference learner that trains the above net
// by backpropping the temporal difference learning rule.
var tdtrainer_options = {learning_rate:0.001, momentum:0.0, batch_size:64, l2_decay:0.01};

var opt = {};
opt.temporal_window = temporal_window;
opt.experience_size = 30000;
opt.start_learn_threshold = 1000;
opt.gamma = 0.7;
opt.learning_steps_total = 200000;
opt.learning_steps_burnin = 3000;
opt.epsilon_min = 0.05;
opt.epsilon_test_time = 0.05;
opt.layer_defs = layer_defs;
opt.tdtrainer_options = tdtrainer_options;

var brain = new deepqlearn.Brain(num_inputs, num_actions, opt); // woohoo


It's very simple to use deepqlearn.Brain. Initialize your network:

   var brain = new deepqlearn.Brain(num_inputs, num_actions);

And to train it, proceed in a loop as follows:

   var action = brain.forward(array_with_num_inputs_numbers);
   // action is a number in [0, num_actions) giving the index of the action the agent chooses
   // here, apply the action to the environment and observe some reward. Finally, communicate it:
   brain.backward(reward); // <-- learning magic happens here; for us the reward is pips gained over time

That's it! Let the agent learn over time (it will take on the order of opt.learning_steps_total steps), and it will only get better and better at accumulating reward as it learns. Note that the agent will still take random actions with probability opt.epsilon_min even once it's fully trained. To completely disable this randomness, or change it, you can disable learning and set epsilon_test_time to 0:

   brain.epsilon_test_time = 0.0; // don't make any random choices, ever
   brain.learning = false;
   var action = brain.forward(array_with_num_inputs_numbers); // get optimal action from learned policy
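
For illustration, here is a rough sketch of how one bar of the sample data above might be fed in as the 6-number input. This is only an assumption; the demo does not specify a date encoding, so a simple day index is used here:

   // Minimal sketch (illustrative only): one bar of the sample data as the
   // 6-number input for brain.forward(). The day-index encoding of the Date
   // column is an assumption, not part of the original demo.
   function barToInput(bar, dayIndex) {
     return [
       dayIndex,                 // stand-in for the Date column
       parseFloat(bar.open),
       parseFloat(bar.high),
       parseFloat(bar.low),
       parseFloat(bar.close),
       parseFloat(bar.volume)
     ];
   }

   var input = barToInput({ open: '1.537', high: '1.5445', low: '1.529',
                            close: '1.5338', volume: '2781' }, 0); // the 1993.05.12 bar
   var action = brain.forward(input); // index into the three actions above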

Thanks

Peter


Sample_GBPUSD.docx
Design Forex Java.pdf

Peter Henry

Dec 29, 2015, 5:53:12 AM12/29/15
to convnetjs
Hello Group

I've attempted to get the Deep Q demo working on Forex data; the results are interesting.


If you would like to contribute, the source code is available at https://github.com/AIForex/AIForex_NN



Peter Henry






Peter Henry

Dec 31, 2015, 2:59:45 AM12/31/15
to convnetjs
Hi Group

I have uploaded a demo here:


Any questions, contact peter...@gmail.com

Peter

Henry E

Feb 1, 2016, 9:46:33 AM2/1/16
to convnetjs
Hi,

I was checking out the site, http://ai.marketcheck.co.uk/Forex. I clicked Start, but it short-circuits and stops working after only a couple of seconds.

How are you finding the Deep Q algorithm? Did it work at all as expected?

Cheers,

Henry

Peter Henry

Feb 1, 2016, 12:21:01 PM2/1/16
to Henry E, convnetjs
Hello Henry

The algorithm is working; it needs to see the data many times before it learns.

I will update the site with my findings.

For programmers who are interested in this project, here is the updated code, to be used in conjunction with ai.marketcheck.co.uk/Forex:




var num_inputs = (6 * 5); // 6 fields (Date, Open, High, Low, Close, Volume) x 5 look-back bars
var num_actions = 3; // (Buy open, exit close), (Sell open, exit close), (Do nothing)
//var actions = ['SELL'];
//var actions = ['BUY','NOTHING'];
//var actions = ['BUY', 'SELL'];
//var actions = ['SELL','NOTHING'];
var actions = ['BUY','SELL','NOTHING'];
var temporal_window = 1; // amount of temporal memory. 0 = agent lives in-the-moment :)
var network_size = num_inputs * temporal_window + num_actions * temporal_window + num_inputs; // 30 + 3 + 30 = 63 with the values above
// the value function network computes a value of taking any of the possible actions
// given an input state. Here we specify one explicitly the hard way
// but user could also equivalently instead use opt.hidden_layer_sizes = [20,20]
// to just insert simple relu hidden layers.
var layer_defs = [];
layer_defs.push({ type: 'input', out_sx: 1, out_sy: 1, out_depth: network_size });
layer_defs.push({ type: 'fc', num_neurons: 50, activation: 'relu' });
layer_defs.push({ type: 'fc', num_neurons: 50, activation: 'relu' });
layer_defs.push({ type: 'regression', num_neurons: num_actions });

// options for the Temporal Difference learner that trains the above net
// by backpropping the temporal difference learning rule.

var tdtrainer_options = { learning_rate: 0.001, momentum: 0.0, batch_size: 64, l2_decay: 0.01 };
var opt = {};
opt.temporal_window = temporal_window;
opt.experience_size = 300000;
opt.start_learn_threshold = 100000;
opt.gamma = 0.7;
opt.learning_steps_total = 20000;
opt.learning_steps_burnin = 300;
opt.epsilon_min = 0.05;
opt.epsilon_test_time = 0.05;
opt.layer_defs = layer_defs;
opt.tdtrainer_options = tdtrainer_options;
var brain = null;
var batchSize = 0;            // look-back bars per decision; read from the page in Start()
var numberOfTrades = 0;       // bars processed so far (index into inputData)
var numberOfActualTrades = 0; // bars on which a BUY or SELL was actually taken
var totalRewards = 0;
var positiveTrades = 0;
var inputData = null;         // parsed CSV rows (Date, Time, Open, High, Low, Close, Volume in the attached sample)
var averageReward = 0;


function Start() {
    $('#TotalPipsGained').html('');
    $('#AverageProfitPerTrade').html('');
    $('#PercentAgeCorrect').html('');
    numberOfTrades = 0;
    numberOfActualTrades = 0;
    totalRewards = 0;
    positiveTrades = 0;
    inputData = $.csv.toArrays($('#inputData').val()); // CSV pasted into the page
    batchSize = parseInt($('#txtNoOfBars').val(), 10); // number of look-back bars per decision
    brain = new deepqlearn.Brain(num_inputs, num_actions, opt);
    brain.learning = true;
    brain.epsilon_test_time = 1; // only used once brain.learning is false; set to 0 then to disable random choices

    window.setTimeout(ProcessData, 1); // pass the function itself rather than calling it immediately
}



function ProcessData() {
    if ((numberOfTrades < inputData.length) && (batchSize + numberOfTrades <= inputData.length)) {

        // Build the look-back window of the last batchSize bars.
        // (Note: brain.forward expects a flat array of num_inputs numbers;
        // whole rows are pushed here, as in the original listing.)
        var arry = [];
        for (var j = numberOfTrades; j < inputData.length && j < numberOfTrades + batchSize; j++) {
            arry.push(inputData[j]);
        }

        var reward = 0;
        var action = brain.forward(arry); // returns index of chosen action
        //if (actions[action] == 'BUY') {
        //    for (var i = 0; i < arry.length; i++) {
        //        reward += arry[i][5] - arry[i][2];
        //    }
        //}
        //else if (actions[action] == 'SELL') {
        //    for (var i = 0; i < arry.length; i++) {
        //        reward += arry[i][2] - arry[i][5];
        //    }
        //}

        // After the loop, j points at the bar AFTER the look-back window, so the
        // reward is the next bar's open-to-close move (index 2 = Open, index 5 = Close
        // for the Date, Time, Open, High, Low, Close, Volume rows of the sample CSV).
        if (j < inputData.length) {
            if (actions[action] == 'BUY') {
                numberOfActualTrades += 1;
                reward += parseFloat(inputData[j][5] - inputData[j][2]);
            }
            else if (actions[action] == 'SELL') {
                numberOfActualTrades += 1;
                reward += parseFloat(inputData[j][2] - inputData[j][5]);
            }
        }
        if (reward > 0)
            positiveTrades += 1;
        totalRewards += reward;
        averageReward = totalRewards / numberOfActualTrades; // was '==', which compared instead of assigning
        brain.backward(reward);
        inputData[numberOfTrades] = null;
        numberOfTrades += 1;

        $('#TotalNoOfTrades').html(numberOfActualTrades);
        $('#TotalPipsGained').html(totalRewards * 10000);
        $('#AverageProfitPerTrade').html((totalRewards / numberOfActualTrades) * 10000);
        $('#PercentAgeCorrect').html((positiveTrades / numberOfActualTrades) * 100);
    }
    else {
        // End of the data: re-parse and start another pass so the agent sees the data many times.
        numberOfTrades = 0;
        inputData = $.csv.toArrays($('#inputData').val());
    }
    window.setTimeout(ProcessData, 1); // schedule the next bar (fractional delays are clamped by the browser)
}



Best
Peter


Peter Henry

Feb 1, 2016, 12:45:31 PM2/1/16
to Peter Henry, Dan Bikle, Henry E, convnetjs
Hi 

Just to inform you, I have updated the site; pay attention to the average profit per trade and the percentage correct.

It takes time to train, but you will see improvement.

ai.marketcheck.co.uk/Forex

Peter


Huitoert

Feb 16, 2016, 2:24:14 PM2/16/16
to convnetjs, kar...@gmail.com, bikl...@gmail.com, c.henr...@gmail.com
Hi Peter,
Do you also have experiment results where each day has been observed only once?

Thanks for sharing!


ywe...@whitebaytech.com

Mar 21, 2016, 9:07:27 AM3/21/16
to convnetjs, c.henr...@gmail.com
Hi,
I'm confused. In the function "ProcessData" you feed the model with all the values in "inputData": arry.push(inputData[j]);
However, the reward is highly dependent on this input: reward += parseFloat(inputData[j][5] - inputData[j][2]);
Am I missing something?



Peter Henry

Mar 21, 2016, 12:55:05 PM3/21/16
to ywe...@whitebaytech.com, convnetjs, Henry E
Hi

This is correct. The reward is based upon the greatest number of points over time; points are calculated from the open to the close.

The action is 1) buy the open and exit at the close, 2) sell the open and exit at the close, or 3) do nothing.

Hope that's clear.



Peter 







ywe

Mar 22, 2016, 4:29:57 AM3/22/16
to convnetjs, ywe...@whitebaytech.com, c.henr...@gmail.com
But if I'm not mistaken, this means in turn that it's not doable in real life: if you make the decision at the start of the bar/time slot, then you can't have the high/low in the input; if you make the decision at the end of the time slot, then you can't realize the reward from that slot. I may be missing something. Can you explain along those lines? Thanks

Peter Henry

Mar 22, 2016, 4:40:07 AM3/22/16
to ywe, Henry E, convnetjs

Hi

The calculation uses the previous bar's data to decide the next bar's action.

At the next open, the algorithm will either buy the open and exit at the close, sell the open and exit at the close, or do nothing.
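
To make that concrete with the sample GBPUSD data posted earlier in the thread: if the decision is made from the 1993.05.12 bar and acted on at the next open (1993.05.13: open 1.5328, close 1.5225), then a buy scores 1.5225 - 1.5328 = -0.0103 (-103 pips), a sell scores +0.0103 (+103 pips), and doing nothing scores 0.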

Hope that makes sense.

Peter

