Anyone coding in Assembler?

Discussion in 'App Development' started by braincell, Feb 28, 2012.

  1. byteme

    No, it IS 1.5 cents per hour for 256MB RAM Linux instances.

    12 cents per hour would get you 2GB RAM instances, if you actually needed that much for each of the 1,000 instances (at that scale, that's 1,000 × $0.015 = $15/hour versus 1,000 × $0.12 = $120/hour).

    And no, they aren't single-core Pentium II CPUs. I assume that was a joke.
     
    #31     Mar 13, 2012
  2. Eight

    Compilers have been able to get within about 99.5% of what a human can do, at least with regard to code size, for a long time, so I doubt there is much hand optimization of assembly language going on anymore. I did work on a pacemaker project where the memory was extremely limited, something like 4 KB of RAM! Pacemakers have to be very low power, and that was all the RAM we started with. We hired an old guy who had been hand-optimizing code for memory usage since the 1950s [he started on UNIVAC!], and he squeezed out every last bit. I mean that literally: he found ways to make a single bit serve a dual purpose.
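
    As a made-up illustration of that kind of trick (not the actual pacemaker code), here is the classic C way to make one byte hold several independent flags instead of burning a variable per flag; the flag names are hypothetical:

    Code:
    #include <stdint.h>
    #include <stdio.h>

    /* Pack independent boolean flags into a single byte: 1 byte of RAM
       instead of one variable per flag. */
    #define FLAG_SENSING  (1u << 0)
    #define FLAG_PACING   (1u << 1)
    #define FLAG_LOW_BATT (1u << 2)

    int main(void) {
        uint8_t flags = 0;

        flags |= FLAG_PACING;            /* set a flag   */
        flags &= (uint8_t)~FLAG_SENSING; /* clear a flag */

        if (flags & FLAG_PACING)         /* test a flag  */
            puts("pacing");
        return 0;
    }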
     
    #32     Mar 13, 2012
  3. bln

    Today's C compilers generate very efficient code and can do auto-vectorization, so there is not much point in hand-writing whole programs in assembler anymore, except for optimizing hotspots and low-level hardware/driver work.

    New vector instructions like AVX/FMA/XOP and the extended vector registers in the latest x86 processors can be accessed through intrinsics from C/C++, so you do not even have to write inline assembler if you do not want to.
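
    As a minimal sketch of what that looks like (assuming a compiler with AVX support, e.g. gcc -mavx):

    Code:
    #include <immintrin.h>  /* AVX intrinsics */
    #include <stdio.h>

    int main(void) {
        /* Add eight floats in one 256-bit AVX instruction, straight
           from C -- no inline assembler needed. */
        __m256 a = _mm256_set1_ps(1.5f);
        __m256 b = _mm256_setr_ps(0, 1, 2, 3, 4, 5, 6, 7);
        __m256 c = _mm256_add_ps(a, b);

        float out[8];
        _mm256_storeu_ps(out, c);
        for (int i = 0; i < 8; ++i)
            printf("%g ", out[i]);
        putchar('\n');
        return 0;
    }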
     
    #33     Mar 13, 2012
  4. I agreed with you that there is probably little practical use for assembler in non-retail trading/research applications, but now you're touching on a different subject. I see why you have a slightly negative view of machine learning, but I'll ask you to consider another side of it. I may not have as much live trading experience, but I know I excel at programming, math, and understanding and creating complex systems. I'm also aware of the arrogance you mention, but that's not it either.

    I've been developing machine learning software for less than a year, though I've been coding my platform for somewhat longer. The strategies developed with genetic programming (GP) so far have performed mostly as expected during live paper trading, as have the few I tested in real live conditions. That may not be significant enough for me to claim "it works", but there are others, people who have been in the trading industry for 30+ years, who have developed and/or used machine learning on the markets for many years with consistent results. I can send you the names via PM if you're interested; those guys really know what they're doing, and I'm sure most of them have as much or (more likely) more experience than you and I. Numbers of papers have been written on the subject, and such systems have been independently monitored. GP has also been used with great success on other time-series problems in chemistry, physics, robotics, voice recognition, math, etc.

    The main point I find comes across with people of backgrounds similar to yours is this: humans can distill ideas about market dynamics into a collection of rules, but the likelihood that those rules are curve fits is the same as the likelihood that patterns found via machine learning are curve fits. That is simply because time flows forward and is uncertain, and the patterns found by machines are only re-formulations of human ideas.

    For example, with GP, given a general description of a human idea about a market dynamic, parameters can be introduced that let the search algorithm find and describe a system based on that idea, or disprove it (and do so much faster than a human building systems manually would). The code and logic produced are perfectly readable, unlike, say, neural networks and older search algorithms. All a search algorithm does is transform the idea into seemingly abstract input/output values that a human can easily misunderstand without experience working with them. Market data, however, is numeric, digital data, perfectly suited for a computer to analyze, and eventually the search must come across a description of a solution that fits the idea of the human who gave it the task (usually much sooner than you would expect). Maybe you aren't familiar with GP and the fact that it produces readable code, or that with carefully selected constraints (fitness functions, strategy limitations, robust backtesting assumptions) you can focus the search on trying out your human idea. That is all search algorithms really are in essence: a tool that makes testing human ideas easier, so the creativity part you talk about isn't really lost, just transformed.

    It also helps if the "head", as you put it, understands the profile of the portfolio and what risk tolerance will be needed for each of the strategies making it up. I find that with a significant number of strategies in a portfolio (say 30), the compounded risk/reward becomes quite bearable, acceptable to even the most risk-averse managers. This comes from a virtual portfolio I am building for myself. Again, I'm not saying this just from reading and theorizing; there are experienced traders and hedge funds out there that have been using this approach for a long time. Using machine learning properly isn't straightforward or simple by any means, and it often gets a bad reputation because of people who used it incorrectly and then reported that it didn't work as expected.
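
    To make the fitness-driven search concrete, here is a toy sketch in C. It is a plain genetic algorithm over the two lookback parameters of a hypothetical moving-average crossover rule, scored on synthetic random-walk prices; it is not tree-based GP and not a real strategy, and every name and number in it is invented for illustration:

    Code:
    #include <stdio.h>
    #include <stdlib.h>

    #define N_PRICES 2000   /* length of the synthetic price series */
    #define POP      32     /* population size                      */
    #define GENS     50     /* generations to evolve                */

    /* One individual: the two lookback lengths of a toy MA-crossover rule. */
    typedef struct { int fast, slow; double fitness; } Indiv;

    static double prices[N_PRICES];

    /* Synthetic random-walk prices -- a stand-in for real market data. */
    static void make_prices(void) {
        double p = 100.0;
        for (int i = 0; i < N_PRICES; ++i) {
            p += rand() / (double)RAND_MAX - 0.5;
            prices[i] = p;
        }
    }

    /* Simple moving average of the `len` prices ending at index `end`. */
    static double sma(int end, int len) {
        double s = 0.0;
        for (int i = end - len + 1; i <= end; ++i) s += prices[i];
        return s / len;
    }

    /* Fitness: total PnL of "long while fast MA > slow MA". A serious
       fitness would also penalize drawdown, trade count, complexity... */
    static double fitness(int fast, int slow) {
        double pnl = 0.0;
        for (int t = slow; t < N_PRICES - 1; ++t)
            if (sma(t, fast) > sma(t, slow))
                pnl += prices[t + 1] - prices[t];
        return pnl;
    }

    static int clampi(int v, int lo, int hi) { return v < lo ? lo : v > hi ? hi : v; }

    int main(void) {
        srand(42);
        make_prices();

        Indiv pop[POP];
        for (int i = 0; i < POP; ++i) {
            pop[i].fast = 2 + rand() % 20;    /* 2..21   */
            pop[i].slow = 25 + rand() % 100;  /* 25..124 */
        }

        for (int g = 0; g < GENS; ++g) {
            int best = 0;
            for (int i = 0; i < POP; ++i) {
                pop[i].fitness = fitness(pop[i].fast, pop[i].slow);
                if (pop[i].fitness > pop[best].fitness) best = i;
            }
            /* Selection + mutation: all others become jittered copies of the best. */
            for (int i = 0; i < POP; ++i) {
                if (i == best) continue;
                pop[i].fast = clampi(pop[best].fast + rand() % 5 - 2, 2, 24);
                pop[i].slow = clampi(pop[best].slow + rand() % 11 - 5, 25, 200);
            }
            printf("gen %2d: fast=%d slow=%d pnl=%.2f\n",
                   g, pop[best].fast, pop[best].slow, pop[best].fitness);
        }
        return 0;
    }

    Note that the "best" parameters this finds on random-walk data are pure curve fit, which is exactly why the fitness function and out-of-sample constraints matter so much.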

    If you haven't yet seen a machine-created strategy that is profitable, I can also PM you some that are and have been for the past few years (independently monitored).

    Finally, thanks for the advice, but I'm planning on looking for a job as a programmer only if I blow all of my accounts, which I doubt will happen soon given my trading track record (however limited), and hopefully never. Even with my own doubts about failure, all I can say is: we'll wait and see.
     
    #34     Mar 14, 2012
  5. Oh... ok, well that might be interesting. However, I need at least 1024MB of RAM per instance. I calculated that a single core of a Core 2 Duo with 1GB of RAM would be OK if it costs 5 cents per hour. If they can match that, I will use it. Right now, though, I'm fine with the rack I have at home and my elitistically fast assembler backtesting app. ;)
     
    #35     Mar 14, 2012
  6. ssrrkk

    There are two types of theories that the human brain can come up with: one might be termed phenomenological, the other fundamental. Phenomenological theories simply describe what is observed in the data without invoking "hidden variables and relationships". They often have more parameters than a fundamental theory, yet often fail to explain the data in all regimes. Fundamental theories are often simpler but can explain more -- so much so that simpler theories that explain more are considered more likely to be true from those facts alone (Occam's razor). Often the observed data comprise "emergent" dynamics that are not obviously connected to the fundamental theory. In other words, there is often no deductive or inductive route from the data to a fundamental theory, whereas there is always one from the data to a phenomenological one. I would contend that supervised learning algorithms can only find phenomenological theories, a.k.a. fits, and cannot find fundamental ones.
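
    One standard way to make that preference quantitative (a generic Bayesian sketch, not a claim about any particular market model): comparing theories M on data D, P(M|D) ∝ P(D|M) · P(M), where P(D|M) = ∫ P(D|θ,M) P(θ|M) dθ. A theory with many free parameters θ spreads its marginal likelihood P(D|M) across many possible datasets, so a simpler theory that still explains the data ends up with the higher posterior; that is Occam's razor in formula form.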
     
    #36     Mar 14, 2012
  7. I mostly agree with that, but I cling to the belief that fundamentals might be found in forms we don't recognize. Data can be derived and transformed to such an extent that it's almost impossible, in theory, to draw a clear line marking a result as a close approximation of fundamentals or not. Perhaps a study could be done, but it would require vast resources, so it will probably remain an open question for a very long time, one we can only speculate about. You said "often", not that there is never a deductive or inductive route, so that little window of theoretical opportunity is there. The bottom line is that many of the relationships describing market dynamics (and the strategies) found with machine learning hold up in live markets, with about the same success rate as manually built ones. That's good enough for me.

    I built a few strategies manually some time ago too, but there is another reason I like data mining and would rather spend time dealing with computers than theorizing about humans.
    This song by Kraftwerk explains it: http://www.youtube.com/watch?v=dppczm_TKMA

    :)
     
    #37     Mar 15, 2012
  8. Eight

    Eight

    How does one implement GPGPU coding? Is there an off-the-shelf solution that turns GPUs into a math coprocessor? I ran across the info that Matlab has GPGPU capabilities, so that is one way in. I'd like to extend OpenQuant to GPU processing, were it doable.
     
    #38     Mar 21, 2012
  9. From the little info I've gathered on the subject, it seemed to me you have to strike a deal with a GPU manufacturer (for a price) so that they give you access to their proprietary assembler/linker/compiler toolchain. Other than that, some decent speed can be achieved with the Cg language (OpenGL), but I'm not sure how practical that is. I've asked programmers in the 3D gaming industry and nobody has given me a straight answer so far.
     
    #39     Mar 26, 2012
  10. ssrrkk

    google CUDA.
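
    For a taste, the canonical first CUDA program is a vector-add kernel in CUDA C, compiled with nvcc. A minimal sketch, with nothing OpenQuant-specific in it:

    Code:
    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    /* Each GPU thread adds one element: the GPU acting as a math coprocessor. */
    __global__ void vec_add(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main(void) {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);

        float *ha = (float *)malloc(bytes);
        float *hb = (float *)malloc(bytes);
        float *hc = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        float *da, *db, *dc;
        cudaMalloc((void **)&da, bytes);
        cudaMalloc((void **)&db, bytes);
        cudaMalloc((void **)&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        /* Launch enough 256-thread blocks to cover all n elements. */
        vec_add<<<(n + 255) / 256, 256>>>(da, db, dc, n);
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

        printf("c[0] = %f\n", hc[0]);  /* expect 3.000000 */
        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }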
     
    #40     Mar 26, 2012