I would have great confidence in my shitty programming skills to make something just as clunky in C++. Actually, I'm confident that anything I built in C++ would end up slower than numpy at calculating basically anything. The best I could do in C++ would be a terminal application with a good chance of calculating something completely wrong.
I would suspect that anyone who is not part of a team, yet claims to be running fully algorithmic systems in 2022, is utterly full of shit. This thread is about semi-automated systems and Python. You are in the wrong thread, son.
@nrstrader, I think your previous post caused some misunderstanding; not everybody got what you meant, since the writing was not easy to follow, IMO. But you are right when you say "This thread is about semi-automated systems and python."
Fair, that was even in the thread title. And if you are not on a team and want to run fully algorithmic, there are not a whole lot of other choices. I have not found one single off-the-shelf platform that can do what I need it to do. But I would never, ever choose Python for such a task. An OOP language is an absolute must.
Yeah, sorry, I was trying to be cute in some sense, but there is so much nonsense in this thread about C++. No one is writing Python in a text editor and running it directly in the interpreter with no libraries for anything of consequence. The whole point of not writing everything in assembly is the power of abstraction. So much of what anyone does with Python is really numpy, and it is hardly trivial to beat numpy with your own C++: https://stackoverflow.com/questions/65888569/how-is-numpy-so-fast

There is a reason Python is completely ubiquitous in ML and data science. If ML were all C++, support vector machines would probably still be state of the art. The best example, though: I have some sklearn-based stuff that runs quite slowly on my 4-core i7 machine. The solution wasn't to pretend I could recreate sklearn in Fortran. The solution was just to scale out to more cores on Saturn Cloud, and it is super fast with no effort because sklearn is already parallelized.

Not to mention example code. There is probably 100X more example code for algorithmic trading in Python than in anything else, and that's before you even count the ML example code, which would make it 1000X. Search "algorithmic trading python" on Amazon, then google the book title with "Git" at the end, and you will pretty much always find a pile of example code. Example: Algorithmic Trading with Python: Quantitative Methods and Strategy Development (2020). Google that plus "Git" and you land on https://github.com/chrisconlan/algorithmic-trading-with-python

If you have to worry about language speed as if you were a latency-arbitrage hedge fund, then to me you have already lost, unless the goal is to build your own latency-arbitrage fund/business. In many cases, I would imagine the difference between Python and language X is smaller than the latency between you and the exchange if you are not co-located. By some of the nonsense logic in this thread, a person would think you don't have a true algorithmic trading system until the whole thing has been written in hand-optimized assembly.
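To make the numpy and sklearn points concrete, here is a rough sketch. The array size, estimator count, and dataset are just illustrative numbers I picked, not anything from the posts above; run it on your own box for real timings.

```python
import time
import numpy as np

# --- numpy vs. a hand-rolled loop --------------------------------------
n = 2_000_000
a = np.random.rand(n)
b = np.random.rand(n)

start = time.perf_counter()
total = 0.0
for x, y in zip(a, b):          # pure-Python loop, no vectorization
    total += x * y
loop_time = time.perf_counter() - start

start = time.perf_counter()
total_np = a @ b                # dispatches to optimized BLAS under the hood
numpy_time = time.perf_counter() - start

print(f"loop:  {loop_time:.3f}s")
print(f"numpy: {numpy_time:.4f}s")

# --- sklearn parallelism is one keyword argument ------------------------
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=20_000, n_features=50, random_state=0)
clf = RandomForestClassifier(n_estimators=300, n_jobs=-1)  # n_jobs=-1 = use every core
clf.fit(X, y)
```

The point being: the loop is the kind of thing you would have to beat with your own C++, and the parallel fit is the kind of thing that scales straight onto more cores with zero extra code.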
It's a relatively big investment in time, expertise, money, etc. This is like debating why anyone would drive from LA to NY when planes and trains exist... Answer: not everyone has the same requirements for time, cost, etc.