well if you want to share data between threads/processes, don't reinvent the wheel: http://chronicle.software/ https://ignite.apache.org/
Your approach with 0MQ is absolutely valid for working with multiple processes. Here is the problem: processes do not share the same address space because of the way the OS manages them, so you are forced to serialize and deserialize your data through sockets or other interprocess buffers. Multithreading and modular design let you work on the same data because every memory address is visible from all modules across the application.

Reading the same data from multiple threads concurrently is not a problem in most situations; trying to read and write at the same time is. Accessing a vector with iterators from a different process directly is impossible. Even if you attempted to pass the address between processes it would not help, because each process has different address offsets known only to the OS. Doing the same within one application from different threads causes problems too: any time the vector changes, the iterators are invalidated. A vector is dynamically allocated, and as it grows it will occupy different chunks of contiguous memory.

I do actually use vectors in a multithreaded setting when I have to, but then I cannot use iterators; instead I get the size once and iterate with a classic counter, and before that I reserve enough memory to avoid reallocation. Otherwise I use a different container, like a list. One can also create a thread-safe version of the containers or use available libraries. I know it would be nice to have it both ways, but computers and OSes were not designed for that yet.
@vicirek you are right about all of that. I am currently using my custom ThreadsafeQueue class that encapsulates std::queue<Quote>, a mutex, and a condition variable. I write to it in one thread and read from it in another; since it is locked every time I start writing or reading, the iterators are not invalidated. Redis and Chronicle (which @2rosy posted above) are in-memory data structures that can be shared between processes. They are in-memory databases that provide an iterator-like interface. I haven't used them but am about to take a look.
Thx for the links, Chronicle may work. Ignite will not, as I want to stay away from running servers, since that may cause issues if I decide to put my system on a cloud.
If I want to keep an STL deque in the central broker (XPUB/XSUB), how can my strategy module access it? Say the deque contains the last 1000 prices of an instrument and I want to calculate the average.
The xpub/xsub proxy is not really meant to maintain any state; it is rather a single, more or less fixed node in your network that facilitates connections between the actual publishers (those producing the data) and subscribers (those consuming it). I'd say it is more suitable for real-time data feed dissemination, and communication is one-way only.

To me your requirement seems very strategy specific, so I'd personally not do this within a dedicated node but rather on the strategy side. What I'd have is a historical data service that strategies can connect to and retrieve data per symbol in various formats (i.e. ticks or bars) with a configurable lookback. If you then really needed a deque that always contains the 1000 most recent prices, I'd probably build my own on top of STL deque that connects to the server on startup, loads the most recent data, and at the same time subscribes to the real-time feed to stay up to date (but it resides within the strategy client).

For the historical data server I'd just use the dealer/router or router/router devices on the server side and a dealer socket for the client.
@hoppla I see, regarding your approach I would have to lock my container and provide callbacks... Thank you for getting back.