Implementation of a frozen model in MT4 to predict price patterns

By: Amirali R. Davoudpour

IraPolska Sp. z o.o. is registered in Poznań, Poland, as a SPÓŁKA Z OGRANICZONĄ ODPOWIEDZIALNOŚCIĄ (limited liability company), KRS no. 0000967349, REGON 52178692200000, NIP 7831855512.

Abstract:

Here we discuss several methods to implement TensorFlow and Open Neural Network Exchange (.onnx) frozen models in C++ and MQL4, in order to acquire real-time or offline predictions through the input/output (I/O) functions of a dynamic-link library (.DLL).

Introduction:

First, we need to install TensorFlow's C++ API:

Installing the TensorFlow C++ API can be a multi-step process. Here's a general guide on how to install it on various operating systems:

1. Install TensorFlow

Before installing the TensorFlow C++ API, you need to have TensorFlow installed. TensorFlow provides pre-built binaries for different platforms, which include the necessary C++ libraries. You can install TensorFlow using one of the following methods:

  • Using pip (Python package manager): Open a terminal or command prompt and run the following command: pip install tensorflow
  • Building from source: TensorFlow's official documentation provides detailed instructions on building TensorFlow from source. You can follow the relevant guide for your operating system from the TensorFlow website: https://www.tensorflow.org/install/source

2. Download TensorFlow C++ API

After installing TensorFlow, you need to download the TensorFlow C++ API. The API includes the header files and libraries required for C++ development with TensorFlow.

  • Pre-built binaries: TensorFlow provides pre-built binaries for the C++ API on the TensorFlow website. You can download the appropriate package for your operating system and extract it to a desired location.
  • Build from source: If you prefer to build the C++ API from source, you can do so by following the instructions in the TensorFlow repository on GitHub, which includes the necessary source code and build scripts for building the C++ API: https://github.com/tensorflow/tensorflow/tree/main/tensorflow/cc

3. Set up the build environment

To build your C++ application using the TensorFlow C++ API, you need to set up the build environment correctly.

  • Include directories: Configure your C++ project to include the downloaded TensorFlow C++ API headers. This can typically be done by adding the path to the include directory of the TensorFlow C++ API to your build configuration.
  • Link libraries: Link against the TensorFlow C++ API libraries when compiling your application. The exact steps depend on your build system and development environment. You may need to add the path to the lib directory of the TensorFlow C++ API and specify the appropriate linker flags, as in the example command after this list.
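For example, on Linux with GCC, linking a small test program against an extracted TensorFlow C++ package might look like the command below; the include and lib paths are placeholders for wherever the API was extracted or built:

g++ my_app.cc -std=c++17 -I/path/to/tensorflow/include -L/path/to/tensorflow/lib -ltensorflow_cc -ltensorflow_framework -o my_app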

4. Write and compile your C++ application

Once the TensorFlow C++ API is set up, you can write your C++ application using the API. Make sure to include the necessary header files and link against the TensorFlow C++ API libraries when compiling your application.

Here's a simple example of a C++ application that uses the TensorFlow C++ API:

#include <tensorflow/core/public/session.h>
#include <tensorflow/core/platform/env.h>
#include <tensorflow/core/framework/graph.pb.h>

int main() {
    // Create a TensorFlow session
    tensorflow::Session* session;
    tensorflow::Status status = tensorflow::NewSession(tensorflow::SessionOptions(), &session);
    if (!status.ok()) {
        // Handle the error
        return 1;
    }

    // Read the frozen graph (.pb) from disk into a GraphDef
    tensorflow::GraphDef graph_def;
    status = tensorflow::ReadBinaryProto(tensorflow::Env::Default(), "path/to/your/model.pb", &graph_def);
    if (!status.ok()) {
        // Handle the error
        return 1;
    }

    // Load the TensorFlow model into the session
    status = session->Create(graph_def);
    if (!status.ok()) {
        // Handle the error
        return 1;
    }

    // Run inference or perform other TensorFlow operations

    // Clean up resources
    session->Close();
    delete session;

    return 0;
}
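Continuing from the example above, inference itself is performed with Session::Run(). The following is a minimal sketch, assuming the frozen graph has an input node named "input" and an output node named "output" with a [1, 4] float input shape; these names and shapes are placeholders and must be replaced with the ones from your actual model:

// Build a dummy input tensor; replace the shape and values with your real features.
tensorflow::Tensor input(tensorflow::DT_FLOAT, tensorflow::TensorShape({1, 4}));
auto values = input.flat<float>();
for (int i = 0; i < values.size(); ++i) {
    values(i) = 0.0f;  // e.g., recent price data, normalized as during training
}

// Feed the input node and fetch the output node (node names are assumptions).
std::vector<tensorflow::Tensor> outputs;
status = session->Run({{"input", input}}, {"output"}, {}, &outputs);
if (status.ok() && !outputs.empty()) {
    float prediction = outputs[0].flat<float>()(0);  // first predicted value
}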

An alternative to the full TensorFlow C++ API is the lighter TensorFlow Lite framework. The frozen .pb graph is first converted to a TensorFlow Lite flatbuffer (.tflite) with the TFLite converter; C++ code that loads the converted model and runs inference can then be compiled into a .dll with export and import functions for MetaTrader. Here's an example code snippet that loads a converted model, runs inference, and writes the output tensor to a binary file:

#include <tensorflow/lite/interpreter.h>
#include <tensorflow/lite/kernels/register.h>
#include <tensorflow/lite/model.h>
#include <tensorflow/lite/tools/signature/signature_def_util.h>
#include <tensorflow/lite/optional_debug_tools.h>

#include <fstream>
#include <iostream>

int main() {
    // The model must be in TensorFlow Lite flatbuffer format (.tflite),
    // typically produced from the frozen .pb graph with the TFLite converter.
    const std::string modelFileName = "path/to/your/model.tflite";
    const std::string outputFileName = "path/to/save/output.bin";

    // Load the TensorFlow Lite model
    std::unique_ptr<tflite::FlatBufferModel> model =
        tflite::FlatBufferModel::BuildFromFile(modelFileName.c_str());
    if (model == nullptr) {
        std::cerr << "Failed to load model: " << modelFileName << std::endl;
        return 1;
    }

    // Create an interpreter
    tflite::ops::builtin::BuiltinOpResolver resolver;
    tflite::InterpreterBuilder builder(*model, resolver);
    std::unique_ptr<tflite::Interpreter> interpreter;
    builder(&interpreter);

    // Allocate tensors
    interpreter->AllocateTensors();

    // Get the input and output tensor names
    const std::vector<int> input_indices = interpreter->inputs();
    const std::vector<int> output_indices = interpreter->outputs();
    const std::string input_tensor_name = interpreter->GetInputName(input_indices[0]);
    const std::string output_tensor_name = interpreter->GetOutputName(output_indices[0]);

    // Resize input tensors if necessary
    const TfLiteIntArray* input_dims = interpreter->tensor(input_indices[0])->dims;
    const int input_height = input_dims->data[1];
    const int input_width = input_dims->data[2];
    // Assuming your model's input shape is [1, height, width, channels]
    interpreter->ResizeInputTensor(input_indices[0], {1, input_height, input_width, 3});
    interpreter->AllocateTensors();

    // Run inference (dummy input values)
    float* input_data = interpreter->typed_input_tensor<float>(0);
    // Assuming your model expects float32 input data
    // Provide your input data here or load it from a source
    // e.g., you can use OpenCV or other libraries to read images
    // and convert them to the required format
    // Fill input_data with your input values

    interpreter->Invoke();

    // Get the output tensor data
    float* output_data = interpreter->typed_output_tensor<float>(0);

    // Write the raw output tensor bytes to a binary file. Note that this only
    // saves the prediction results; producing an actual .dll requires compiling
    // wrapper code such as this into a library that exports the inference function.
    std::ofstream outfile(outputFileName, std::ios::binary);
    outfile.write(reinterpret_cast<const char*>(output_data), interpreter->tensor(output_indices[0])->bytes);
    outfile.close();

    std::cout << "Inference finished; output tensor written to " << outputFileName << std::endl;

    return 0;
}

Note: Make sure to link against the TensorFlow Lite library (-ltensorflowlite) and include the necessary header files for compilation.
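The snippet above only demonstrates inference. To obtain an actual .dll that MetaTrader 4 can import, wrapper functions have to be exported and the file compiled as a Windows DLL (for example with MSVC's /LD switch or MinGW's -shared flag). The following is a minimal sketch under that assumption; the export names LoadModel and Predict, the flat double-array interface, and the single global interpreter are illustrative choices, not part of any official API:

// model_bridge.cpp - hedged sketch of a TensorFlow Lite wrapper DLL for MQL4.
// The function names and the flat float/double interface are assumptions.
#include <tensorflow/lite/interpreter.h>
#include <tensorflow/lite/kernels/register.h>
#include <tensorflow/lite/model.h>

#include <memory>

static std::unique_ptr<tflite::FlatBufferModel> g_model;
static std::unique_ptr<tflite::Interpreter> g_interpreter;

extern "C" __declspec(dllexport) int __stdcall LoadModel(const char* path) {
    g_model = tflite::FlatBufferModel::BuildFromFile(path);
    if (g_model == nullptr) return 0;

    tflite::ops::builtin::BuiltinOpResolver resolver;
    tflite::InterpreterBuilder builder(*g_model, resolver);
    if (builder(&g_interpreter) != kTfLiteOk) return 0;
    if (g_interpreter->AllocateTensors() != kTfLiteOk) return 0;
    return 1;  // success
}

extern "C" __declspec(dllexport) int __stdcall Predict(const double* input, int input_len,
                                                       double* output, int output_len) {
    if (g_interpreter == nullptr) return 0;

    // Copy the caller's double array into the model's float input tensor.
    float* in = g_interpreter->typed_input_tensor<float>(0);
    for (int i = 0; i < input_len; ++i) in[i] = static_cast<float>(input[i]);

    if (g_interpreter->Invoke() != kTfLiteOk) return 0;

    // Copy the float output tensor back into the caller's double array.
    const float* out = g_interpreter->typed_output_tensor<float>(0);
    for (int i = 0; i < output_len; ++i) output[i] = static_cast<double>(out[i]);
    return 1;  // success
}

From MQL4, such a library would be declared with a #import block (here the file name model_bridge.dll is likewise just an example), after which LoadModel() can be called once during initialization and Predict() on every tick with an array of recent prices as input and the predicted values as output.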

Using a Python connector to read .onnx models:

To use this PythonConnector in your MQL4 code, you can follow these steps:

  1. Save the PythonConnector.mqh file in the Include directory of your MetaTrader 4 terminal installation.
  2. In your MQL4 code, include the PythonConnector.mqh file using the #include directive:

#include <PythonConnector.mqh>

  3. Create an instance of the PythonConnector class and specify the Python module name and function name to call:

     PythonConnector python("my_module", "my_function");

  4. Use the CallPythonFunction() method of the PythonConnector object to call the Python function and retrieve the result:

     int result = python.CallPythonFunction(123);

     Replace 123 with the parameter value to pass to the Python function. The result variable will store the integer value returned by the Python function.

Please note that this is a basic example to demonstrate the concept of creating a Python connector for MQL4. You'll need to make sure the Python module and function you are calling exist and are implemented correctly in your Python code.

Also, make sure you have the necessary Python environment set up and the Python DLL available for your MQL4 application to load.
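For completeness, the sketch below shows one way such a Python DLL could be implemented in C++ by embedding the CPython interpreter through the standard Python C API. The export name CallPythonFunction and its module/function/parameter signature are assumptions chosen to mirror the usage above (the PythonConnector wrapper would store the module and function names from its constructor and forward them to this export); this is an illustration, not the actual connector implementation:

// python_bridge.cpp - hedged sketch of a DLL that lets MQL4 call a Python function.
// Assumes the CPython headers and import library are available at build time.
#include <Python.h>

extern "C" __declspec(dllexport) int __stdcall CallPythonFunction(const char* module_name,
                                                                  const char* function_name,
                                                                  int parameter) {
    if (!Py_IsInitialized()) Py_Initialize();

    int result = -1;
    PyObject* module = PyImport_ImportModule(module_name);
    if (module != NULL) {
        PyObject* func = PyObject_GetAttrString(module, function_name);
        if (func != NULL && PyCallable_Check(func)) {
            // Call module.function(parameter) and convert the result to an int.
            PyObject* value = PyObject_CallFunction(func, "i", parameter);
            if (value != NULL) {
                result = (int)PyLong_AsLong(value);
                Py_DECREF(value);
            }
        }
        Py_XDECREF(func);
        Py_DECREF(module);
    }
    return result;
}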
