It's a good question. What I've been developing is a solution to the problem of expressing a proper trading system inside one .cs file, which is how NT and the other platforms ask you to do it. Yes, they can call out to other logic in separate indicators and things of that nature, but a few years ago I was heading towards a single .cs strategy file which was thousands and thousands of lines long. I don't believe the developers of NT really envisioned someone using it in that way...

So to answer your question: what I've developed is a modular way to put together a trading strategy using changeable parts such as IEntryAlgorithm, IStoplossAlgorithm, IProfitTargetAlgorithm, ITrailingStopAlgorithm, ITradeProfile and the list goes on... these shouldn't really be living inside just one .cs file inside a single class. This is just for a Strategy, which lives inside an AlphaModel service. The job of this service is simply to receive well-formed market data (tick or time based bars) and run the history over the strategies instantiated inside it. Trading signals are produced which are then sent to the Portfolio service for position sizing and splitting into a set of atomic orders. This order packet is then sent to a Risk service (which keeps track of all positions, total market exposure, margin status etc.), and right here is something NT can't do so well: act as a risk manager for a fleet of strategies running across, say, 20 different instruments. Trades approved by the risk manager are then sent back to the Portfolio and on to the Execution service for routing to the broker. There's more to it, but this is in fact the basic model of how a trading platform runs at a hedge fund. It's definitely different to a retail charting package and trading platform such as NinjaTrader, which is fine for "10 SMA > 20 SMA, BUY" type stuff.

To expand on the above, I've really only described part of the infrastructure of a black box: data in -> orders/order modifications out. There's more to it in the way of data aggregation/cleansing/storage, the back-testing environment, and the applications which can plug into these services for visualization or whatever you can conceive of, really. So essentially more and more of the full technology stack for a hedge fund or pro algo trading environment, utilizing as pluggable components those parts which I haven't fully implemented yet (a charting application for example, or a full GUI which can view the models and portfolio as they run).

I have a vision of developing trading systems based on Computational Intelligence: a blend of Fuzzy Logic, Neural Networks and Genetic Algorithms. I do like the philosophy of simple design (which I apply to everything I can, actually), and I know simple trading strategies based on 1-2 indicators and expressed in a script or .cs file can work. But I'm taking this approach, which calls for more than what a retail platform can provide me. I hope I was able to express this clearly enough in this relatively brief message.
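To give a rough idea of what I mean by pluggable parts, here's a stripped-down sketch. The interface names are the ones I mentioned above, but the method signatures and the Bar/TradeSignal types are purely illustrative, not my actual code:

    using System;

    public class Bar { public DateTime Time; public decimal Open, High, Low, Close; public long Volume; }
    public class TradeSignal { public string Symbol; public int Direction; public decimal StopLoss; public decimal ProfitTarget; }

    // Each concern lives behind its own small interface rather than inside one giant .cs file.
    public interface IEntryAlgorithm { TradeSignal Evaluate(Bar bar); }
    public interface IStoplossAlgorithm { decimal CalculateStop(TradeSignal signal, Bar bar); }
    public interface IProfitTargetAlgorithm { decimal CalculateTarget(TradeSignal signal, Bar bar); }

    // A Strategy is just a composition of those parts; the AlphaModel service feeds it bars.
    public class Strategy
    {
        private readonly IEntryAlgorithm entry;
        private readonly IStoplossAlgorithm stop;
        private readonly IProfitTargetAlgorithm target;

        public Strategy(IEntryAlgorithm entry, IStoplossAlgorithm stop, IProfitTargetAlgorithm target)
        {
            this.entry = entry;
            this.stop = stop;
            this.target = target;
        }

        // Returns a signal (or null), which is forwarded to the Portfolio service for sizing,
        // then to the Risk service for approval, then to Execution for routing to the broker.
        public TradeSignal OnBar(Bar bar)
        {
            var signal = entry.Evaluate(bar);
            if (signal == null) return null;
            signal.StopLoss = stop.CalculateStop(signal, bar);
            signal.ProfitTarget = target.CalculateTarget(signal, bar);
            return signal;
        }
    }

Swapping an entry or stop model is then just a matter of passing a different implementation into the constructor, which is exactly what becomes impossible once everything is welded into one class.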
I don't see any issue connecting directly from NT to FXCM and coding your strategies without any GitHub projects. I think I'm missing something important - the core reason to create all these projects. Can you explain more, please?

upd: Sorry, I missed your answer before typing the text. Please ignore it for a while.
Hello cjdsellers,

Excellent initiative - it looks like you have a very good, big project:

1- Is it a good idea to use 3D charting (SciChart), or is a 2D charting library enough?
2- As such a project needs a lot of collaboration from everyone, I feel there are some professional people in this discussion: InvBox gkishot Zzzz1 tommcginnis smcoder douglas_prada birzos Kevin Schmit fan27 smcoder birzos JoseDev
3- I have no idea about coding, so how can I participate in this project? (I suggest assigning tasks.)
4- Timing is very important; if you spend 3 years getting the project done, it might be outdated by then.
No coding experience but willing to assign the tasks? If that is the case, I think we can all breathe easy and relax with confidence that this project will finally be able to get off the ground.
@fan27 you are very correct, and yes, it is true I have no coding experience. I'd like to contribute in the ways I can, such as: getting my assigned task done by hiring a coding company and paying for it, or buying the latest version of charting software to use as a library. Again, you are very correct.
Hi Kharidy,

Thanks for your post. SciChart is indeed a great charting package and I own a license myself - from when I anticipated creating an entire GUI charting application from scratch, a process which isn't so much hard as long and laborious. I'll probably find a way to utilize it in the future.

As for the coding, I really recommend you dive in and explore! Anything you don't know about or have no experience with can seem daunting at first, but part of the joy here is in the journey, as you slowly unravel the mysteries of the currently strange words and symbols. I recommend buying a couple of books and getting onto some of the great https://www.pluralsight.com/ courses. My other favorite way is just to download existing code from open source projects and play around with it - understand how it works. I recommend C#; also look at Python.

Good luck
I'd like to share something interesting that is not mine but is worth a look (original site: https://quant.stackexchange.com/que...ancial-data-time-series-database-from-scratch):

"I am going to recommend something that I have no doubt will get people completely up in arms and probably get people to attack me. It happened in the past and I lost many points on StackOverflow as people downvoted my answer. I certainly hope people are more open minded in the quant forum.

Note - It seems that this suggestion has created some strong disagreement again. Before you read this I would like to point out that this suggestion is for a "small buy side firm" and not a massive multiuser system.

I spent 7 years managing a high-frequency trading operation and our primary focus was building systems just like this. We spent a huge amount of time trying to figure out the most efficient way to store, retrieve and analyze order level data from the NYSE, NASDAQ and a wide variety of ECNs. What I am giving you is the result of that work.

Our answer was: Don't Use a Database. A basic structured file system of serialized data chunks works far better. Market time series data is unique in many ways, both in how it is used and how it is stored. Databases were developed for wildly different needs and actually hurt the performance of what you are trying to do.

This is in the context of a small to mid-sized trading operation that is focused on data analysis related to trading strategies or risk analytics. If you are creating a solution for a large brokerage or bank, or have to meet the needs of a large number of simultaneous clients, then I imagine your solution would differ from mine.

I happen to love databases. I am using MongoDB right now for part of a new project allowing us to analyze options trades, but my market time series data, including 16 years of options data, is all built into a structured file store. Let me explain the reasoning behind this and why it is more performant.

First, let's look at storing the data. Databases are designed to allow a system to do a wide variety of things with data: the basic CRUD functions - Create, Read, Update and Delete. To do these things effectively and safely, many checks and safety mechanisms must be implemented. Before you read data, the database needs to be sure the data isn't being modified, it is checking for collisions, etc. When you do read the data, the server puts a lot of effort into caching that data and determining if it can be served up faster later. There are indexing operations and replication of data to prepare it to be viewed in different ways. Database designers have put huge amounts of effort into making these functions fast, but they all take processing time, and if they are not used they are just an impediment.

Market time series data is stored in a completely different way. In fact, I would say it is prepared rather than stored. Each data item only needs to be written once and after that never needs to be modified or changed. Data items can be written sequentially; there is no need to insert anything in the middle. It needs no ACID functionality at all. It has little to no references out to any other data; the time series is effectively its own thing.

As a database does all the magic that makes databases wonderful, it also packs on the bytes. The minimum space data can take up is its own original size. It may be able to play some tricks with normalizing data and compression, but those only go so far and slow things down.
The indexing, caching and referencing of the data ends up packing on the bytes and chewing up storage.

Reading is also very simplified. Finding data is as simple as time & symbol. Complex indexing does it no good. Since time series data is typically read in a linear fashion and in a sequential chunk at once, caching strategies actually slow the access down instead of helping. It takes processor cycles to cache data you aren't going to read again anytime soon.

These are the basic structures that worked for us. We created basic data structures for serializing the data. If your major concern is speed and data size, you can go with a simple custom binary storage. In another answer, omencat suggested using TeaFiles and that looks like it has some promise also. Our recent need is for more flexibility, so we chose to use a fairly dense, but flexible, JSON format.

We broke the data up into fairly obvious chunks. The EOD stock data is a very easy example, but the concept works for our larger datasets also. We use the data for analysis in fairly traditional time series scenarios. It could be referenced as one quote or as a series containing years of data at a time. It was important to break the data down into bite-sized chunks for storage, so we chose to make one "Block" of our data equal one year of EOD stock time series data. Each block is one file that contains a year of OHLC EOD data serialized as JSON. The name of the file is the stock symbol prefixed by an underscore. Note - the underscore prevents issues when the stock symbol conflicts with DOS device names such as COM or PRN.

Note: make sure you understand the limitations of your file system. We got in trouble when we put too many files in one place. This led to a directory structure that is effectively its own index. It is broken down by the year of data and then also sorted by the first letter of the stock symbol. This gives us roughly 20 to a few hundred symbol files per directory. It looks roughly like this:

\StockEOD\{YYYY}\{Initial}\_symbol.json

AAPL data for 2015 would be \StockEOD\2015\A\_AAPL.json

A small piece of its data file looks like this:

[{"dt":"2007-01-03T00:00:00","o":86.28,"h":86.58,"l":81.9,"c":83.8,"v":43674760},
{"dt":"2007-01-04T00:00:00","o":84.17,"h":85.95,"l":83.82,"c":85.66,"v":29854074},
{"dt":"2007-01-05T00:00:00","o":85.84,"h":86.2,"l":84.4,"c":85.05,"v":29631186},
{"dt":"2007-01-08T00:00:00","o":85.98,"h":86.53,"l":85.28,"c":85.47,"v":28269652}

We have a router object that can give us a list of filenames for any data request in just a handful of lines. Each file is read with an async filestream and deserialized. Each quote is turned into an object and added to a sorted list in the system. At that point, we can do a very quick query to trim off the unneeded data. The data is now in memory and can be used in almost any way needed. If the query size gets too big for the computer to handle, it isn't difficult to chunk the process; it takes a massive request to get there.

I have had programmers who I described this to almost go into a rage, telling me how I was doing it wrong - that this was "rolling my own database" and a complete waste of time. In fact, we switched from a fairly sophisticated database. When we did, our codebase to handle this dropped to a small handful of classes and less than 1/4 of the code we used to manage the database solution. We also got nearly a 100x jump in speed. I can retrieve 7 years of stock end-of-day data for 20 symbols in a couple of milliseconds.
Our old HF trading system used similar concepts but in a highly optimized Linux environment and operated in the nanosecond range."
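To make the file layout described above a bit more concrete, here is a minimal C# sketch of reading one of those year blocks back. The path scheme and JSON field names come straight from the quoted example; the class and method names are mine, and I'm assuming Newtonsoft.Json for the deserialization:

    using System;
    using System.Collections.Generic;
    using System.IO;
    using Newtonsoft.Json;

    public class EodQuote
    {
        [JsonProperty("dt")] public DateTime Date { get; set; }
        [JsonProperty("o")]  public decimal Open { get; set; }
        [JsonProperty("h")]  public decimal High { get; set; }
        [JsonProperty("l")]  public decimal Low { get; set; }
        [JsonProperty("c")]  public decimal Close { get; set; }
        [JsonProperty("v")]  public long Volume { get; set; }
    }

    public static class EodStore
    {
        // Builds the path exactly as described: \StockEOD\{YYYY}\{Initial}\_SYMBOL.json
        public static string PathFor(string root, string symbol, int year)
            => Path.Combine(root, year.ToString(), symbol.Substring(0, 1), "_" + symbol + ".json");

        // Reads one year block and deserializes it, e.g. LoadYear(@"C:\StockEOD", "AAPL", 2015).
        public static List<EodQuote> LoadYear(string root, string symbol, int year)
        {
            string json = File.ReadAllText(PathFor(root, symbol, year));
            return JsonConvert.DeserializeObject<List<EodQuote>>(json);
        }
    }

No indexes, no query planner - finding the data really is just symbol + year, which is the whole point of the answer above.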
Fantastic post! Seems like a lot of people get stuck on theory and an idea they have of how something "should" be done without stepping back and asking themselves what problem they really are trying to solve.
Hi CJ,

I have read your post about collaboration and C# and would like to arrange a time to Skype if you are interested. Firstly, a little about my background: I have over 20 years in IT and significant solution architecture and coding experience, having founded my own software development company 6 years ago which makes CRM solutions. I have recently completed a rather epic undertaking: the creation of an Azure-based cloud service consisting of 4 worker roles and a dashboard web app, which is currently capable of trading 47 instruments on 2 brokers (Kraken and Oanda). The platform can be described quite simply in the following way.

Take Profit Targets: I use Long Short-Term Memory recurrent neural network models, one PER INSTRUMENT and PERSISTENT (stateful), built with the Microsoft Cognitive Toolkit framework, outputting predictions time-shifted -8, -16 & -24 hours, 24 hours a day, 365 days a year, based on continuously updated data.

Stop Loss Targets: all stop losses are 2:1 profit/loss (half the distance between actual and predicted).

Entry Conditions: (precondition = margin < 25% and market open etc.)

Signal 1 = Price + spread above or below prediction (if above, buy; if below, sell).

Signal 2 = Instrument is in the top or bottom 61.8% or 38.2% of the 3, 9, or 27 day range (dynamically calculated term period overlay Fibonacci levels). This brings 2 factors to the table: a) if the prediction was in the bottom 38.2%, we would not want to sell unless the current price was in the 61.8% or above range (range = price fluctuation over time). This gives you 'statistically' improved and controlled 'trade entry ranges', hence adding an edge. You can use the 'Fibonacci arc' or Fibonacci retracement tool on most charting software to get an idea of what it's doing; the difference is that it does this calculation every 5 minutes and updates its term min/max range period percentages. (A rough sketch of this filter is below.)

Signal 3 = Custom Bollinger Band trend signal for 8 / 16 / 24 hour time periods, picking up trend toward the prediction. Bollinger Bands are a well known chart indicator; I just use 3 of them overlaid, with special values I developed, to create a trend indicator.

Signal 4 = The last 2 Rate of Change values for the 8 / 16 / 24 hour time periods must be a positive/negative match - again a regular indicator you can add to a chart. It provides additional 'confirmation' of trend toward the prediction.

Signal 5 = Ichimoku Cloud trend indicator. Red above blue = short, blue above red = long - a regular (complex) trend indicator, again used to confirm the market is moving toward the prediction.

Trade Management Rules: a) If price has moved more than 38.2% toward the predicted price, update the order with trailing stop = purchase price. b) If price has moved more than 61.8% toward the predicted price AND margin use is greater than 25%, then close the trade.

Volume Sizing: Currently something I am working on. I have some advanced pip value / exchange rate functions now in testing; however, I currently use minimum fixed trade amounts per instrument, multiplied by the percent difference between the current actual and predicted price (if over 1). This gives 'a form' of weighting which I still think is a good idea to keep during implementation of the proper pip-value-calculated volumes.

Whilst obviously I am a little biased, having created this myself, I do believe it is one of the most state-of-the-art 'personal proprietary' platform 'architectures' around, but it has taken me a huge amount of time, effort and, to be honest, money, which I generally don't have available.
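To make Signal 2 a bit more concrete, here is a rough C# sketch of the range-position idea. This is not my production code; the names and the way the rule is expressed are just for illustration:

    using System;
    using System.Linq;

    public static class FibRangeFilter
    {
        // Where the current price sits within the min/max of the lookback window, as a fraction 0..1.
        public static double PositionInRange(double[] closes, double price)
        {
            double min = closes.Min();
            double max = closes.Max();
            if (max - min < 1e-9) return 0.5;   // flat range, treat as mid
            return (price - min) / (max - min);
        }

        // Example rule in the spirit of the above: only take a sell signal when price is in the
        // top 61.8% band while the prediction sits in the bottom 38.2% band.
        public static bool AllowSell(double pricePosition, double predictionPosition)
            => pricePosition >= 0.618 && predictionPosition <= 0.382;
    }

In the live system those min/max levels are recalculated every 5 minutes across the 3, 9 and 27 day windows.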
I can say that it is trading positively now and making a good profit; however, there is always room for improvement.

I totally understand where you are coming from with all the other platforms/libraries. One reason I have the system I do now is because I had some 'temporary success' with MT4 EAs like Trio Dancer and Forex Benz many years ago. Of course I quickly learned their limitations and why they would blow accounts, but at the time I didn't have the necessary coding skills or knowledge to embark on my own platform. What I did take away from that experience was the 'massive opportunity to earn' that existed - I just had to find a way to take full advantage of it in a 'reliable fashion'. To this end I have sourced the best predictive neural network model/infrastructure (and tweaked it), created all the necessary back-end data store/retrieval services etc., and then embarked on applying traditional market analysis indicator rules and typical trade rule layers on top.

I firmly believe in an iterative approach to development: first recognizing underlying market principles/methods, and then finding a way to incorporate and encode these in a systematic way toward building a 'comprehensive automated trading model'. Also I'm sure you understand the limitations of working with platforms like NinjaTrader or QuantConnect etc., even if they do offer some C# or F#.

I am interested to learn about the current state of your platform and, if you are interested in collaborating in a serious capacity, we can then look at merging relevant value-add code base features toward this end. Whilst I have come a long way alone on this, I can't devote any of my own dev resource to it as he has another product to maintain, and so I need to make a C# trading friend, so to speak. Two brains are definitely better than one when it comes to this. The C# / platform is the easy part! For example, it took me 2 weeks to fully understand the different unit size calculation methods and resource snippets, and only 60 minutes to cut the code.

I am looking for someone who has a real passion and dedication to realizing life-changing profits using all the knowledge and resources at their disposal, be that time/skill/money etc., to hopefully work with me to further refine and maintain the current trajectory.

Current goals include testing/implementation of the pip value / risk / pip-step unit size:

TradeUnitSize = int.Parse(((Convert.ToDouble(accountSummary.balance) * MaximumStopLossAccountRiskPercent / 100) / (StopLossInPips * InstrumentPipValue)).ToString());

I want to add instrument leverage to this calculation (somehow - a rough sketch of one option is below), plus some additional dashboard refinements for live trading summary analysis, plus further debugging where less than 1% of applied trailing stops get rejected. Once volume is perfect for forex/commodities, a review of crypto volumes (currently using a 'crude' way to determine purchase volume instead of instrument class like in Oanda). Once at that level, extend to as many instruments as I can afford infrastructure for (potentially 100+) to further increase profits, whilst dropping any isolated non-performing instruments along the way.

https://tds-trading-dashboardwebapp.azurewebsites.net - predictions
https://tds-trading-dashboardwebapp.azurewebsites.net/Trading_Signals/Index - signals

I also live in Australia, which would make collaboration easier.
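For what it's worth, here is roughly where I'm heading with the unit size calculation once leverage is in the mix - only a sketch of one option (a margin-based cap), not finished code:

    using System;

    public static class Sizing
    {
        public static int CalculateUnitSize(
            double accountBalance,
            double maxStopLossAccountRiskPercent,
            double stopLossInPips,
            double instrumentPipValue,
            double instrumentPrice,
            double leverage)
        {
            // Risk-based size: units for which hitting the stop loss costs at most X% of the account.
            double riskAmount = accountBalance * maxStopLossAccountRiskPercent / 100.0;
            double unitsByRisk = riskAmount / (stopLossInPips * instrumentPipValue);

            // Leverage cap: never request more notional exposure than the available margin allows.
            double unitsByMargin = accountBalance * leverage / instrumentPrice;

            return (int)Math.Floor(Math.Min(unitsByRisk, unitsByMargin));
        }
    }

The first term is the same risk calculation as the one-liner above, just without the double-to-string-to-int round trip; the second term is one guess at how instrument leverage could bound it.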
My Skype is owenrosswilliams and my email is owen dot ross dot williams at gsnail dot com. I really hope to hear from you if you feel this is something you would like to become involved in.

KR - Owen