That is true, but it doesn't really illuminate much. The right way to look at these things is from the functional point of view, and the right way to view functional programming is through the metaphor of the Tower of Babel story from the Bible (no, I am not crazy). The key idea is that you need to normalize different things to a common basis before presenting them to a NN so that it can generalize. This is the idea of types, and the functional idea (which comes from mathematics) of lifting types. Then you also have to see different NN topologies from the point of view of functional programs. This is more abstract than what is presented in watered-down explanations, but once you understand the correct abstraction, everything is truly simple. This is the best article I have ever seen on the subject: "Deep Learning & Functional Programming" -- "One of the key insights behind modern neural networks is the idea that many copies of one neuron can be used in a neural network." http://colah.github.io/posts/2015-09-NN-Types-FP/
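To make the "many copies of one neuron" point concrete, here is a toy sketch (plain Python, with names of my own invention, not code from the article): a 1-D convolutional layer is literally one neuron function mapped over sliding windows of the input, so weight sharing is just function reuse.

```python
# Toy illustration: a conv layer as "map one neuron over windows".
# All names here are hypothetical; this is a sketch, not a real framework.

def neuron(weights, bias, window):
    """One neuron: weighted sum plus bias, then a ReLU activation."""
    s = sum(w * x for w, x in zip(weights, window)) + bias
    return max(0.0, s)

def conv1d(weights, bias, xs):
    """Lift the single neuron over all windows -- the SAME weights everywhere."""
    k = len(weights)
    windows = [xs[i:i + k] for i in range(len(xs) - k + 1)]
    return [neuron(weights, bias, w) for w in windows]

signal = [1.0, 2.0, 3.0, 4.0, 5.0]
print(conv1d([-1.0, 1.0], 0.0, signal))  # every output reuses one neuron
```

The functional-programming reading is that the layer is a higher-order function: `conv1d` lifts `neuron` from one window to a whole sequence, which is the article's point about composing copies of a neuron.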
The original purpose of AI was to mimic human intelligence, at least that was the impression I got from sitting in Geoffrey Hinton's class on AI a long time ago. But now it seems almost certain that one can do much better than that with machine learning: namely, one can use machine learning to discover trade ideas that would be hard to find using the human mind alone. This is incredible.
Just curious: would you, or someone, please give an example or two of such trade ideas that machine learning could produce but a human mind couldn't? Please don't go into detail regarding the process, as machine learning is all Greek to me; just the resulting trade idea would be interesting to see, or even what it tries to do that a human couldn't. In other words, if the human mind can identify a trend and find an advantageous point at which to enter within that trend, I would be interested to see how machine learning or anything else could top that. Thank you.
The problem with machine learning in trading is how to reduce it to practice without doing trivial classification of known indicators and patterns already used by many noise traders. A GP engine can be used to evolve systems based on these indicators, but the results will always be questionable and they actually tend to fail. One should start with bare-bones rules of arithmetic/Boolean algebra and see what patterns emerge that are not trivially identified by noise traders; otherwise there is no edge. I have seen two commercial programs that claim to do that: Trading System Lab and DLPAL. The price of the first targets funds with deep pockets; the second is far more affordable and also supports the discretionary EOD trader. Both programs are good starting points away from known indicators and the noise, and neither assumes any knowledge about indicators, which is important in reducing over-fitting to noise. I hope this information helps.
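For illustration only, here is a minimal sketch of the "bare-bones arithmetic/Boolean" idea: generate random rules built from nothing but comparisons of lagged prices, and score them on synthetic data. The rule grammar, the fitness function, and the data are all made up by me; this is not how either commercial program works, just the flavor of searching primitives rather than named indicators.

```python
# Hypothetical sketch: random search over price-comparison rules.
# No indicators -- only Boolean comparisons of lagged closes.
import random

def random_rule(rng, max_lag=5):
    """A rule compares two distinct lagged closes: close[t-a] <op> close[t-b]."""
    a, b = rng.sample(range(max_lag + 1), 2)
    op = rng.choice(["<", ">"])
    return (a, op, b)

def apply_rule(rule, closes, t):
    a, op, b = rule
    x, y = closes[t - a], closes[t - b]
    return x < y if op == "<" else x > y

def score(rule, closes, max_lag=5):
    """Toy fitness: average next-bar change on the bars where the rule fires."""
    hits = [closes[t + 1] - closes[t]
            for t in range(max_lag, len(closes) - 1)
            if apply_rule(rule, closes, t)]
    return sum(hits) / len(hits) if hits else float("-inf")

rng = random.Random(0)
closes = [100 + i + rng.gauss(0, 1) for i in range(200)]  # synthetic, trending series
best = max((random_rule(rng) for _ in range(500)), key=lambda r: score(r, closes))
print(best, round(score(best, closes), 3))
```

A real GP engine would also crossover and mutate rule trees, and (as the post warns) any in-sample fitness like this one over-fits badly without out-of-sample validation.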
https://research.googleblog.com/2016/08/text-summarization-with-tensorflow.html

Text summarization with TensorFlow
Wednesday, August 24, 2016
Posted by Peter Liu and Xin Pan, Software Engineers, Google Brain Team

Every day, people rely on a wide variety of sources to stay informed -- from news stories to social media posts to search results. Being able to develop Machine Learning models that can automatically deliver accurate summaries of longer text can be useful for digesting such large amounts of information in a compressed form, and is a long-term goal of the Google Brain team. Summarization can also serve as an interesting reading comprehension test for machines. To summarize well, machine learning models need to be able to comprehend documents and distill the important information, tasks which are highly challenging for computers, especially as the length of a document increases.

In an effort to push this research forward, we're open-sourcing TensorFlow model code for the task of generating news headlines on Annotated English Gigaword, a dataset often used in summarization research. We also specify the hyper-parameters in the documentation that achieve better than published state-of-the-art on the most commonly used metric as of the time of writing. Below we also provide samples generated by the model.

Extractive and abstractive summarization

One approach to summarization is to extract parts of the document that are deemed interesting by some metric (for example, inverse-document frequency) and join them to form a summary. Algorithms of this flavor are called extractive summarization.

Original text: Alice and Bob took the train to visit the zoo. They saw a baby giraffe, a lion, and a flock of colorful tropical birds.
Extractive summary: Alice and Bob visit the zoo. saw a flock of birds.

Above we extract the words bolded in the original text and concatenate them to form a summary.
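The extractive approach described above can be sketched in a few lines. This toy scorer (my own simplification, not the blog's code) uses raw word frequency as a crude stand-in for a metric like inverse-document frequency, ranks sentences by it, and keeps the top ones.

```python
# Toy extractive summarizer: score sentences by average word frequency
# and keep the top-ranked sentence(s), in original order.
from collections import Counter
import re

def extractive_summary(text, n_sentences=1):
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sent):
        toks = re.findall(r"[a-z']+", sent.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)

    chosen = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    # emit chosen sentences in their original document order
    return " ".join(s for s in sentences if s in chosen)

doc = ("Alice and Bob took the train to visit the zoo. "
       "They saw a baby giraffe, a lion, and a flock of colorful tropical birds.")
print(extractive_summary(doc))
```

Note this extracts whole sentences rather than the word-level spans the blog illustrates; it shows the same constraint, though: the summary can only reuse text from the source, which is exactly what abstractive models relax.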
As we can see, sometimes the extractive constraint can make the summary awkward or grammatically strange. Another approach is to simply summarize as humans do, which is to not impose the extractive constraint and allow for rephrasings. This is called abstractive summarization.

Abstractive summary: Alice and Bob visited the zoo and saw animals and birds.

In this example, we used words not in the original text, maintaining more of the information in a similar amount of words. It's clear we would prefer good abstractive summarizations, but how could an algorithm begin to do this?

About the TensorFlow model

It turns out that for shorter texts, summarization can be learned end-to-end with a deep learning technique called sequence-to-sequence learning, similar to what makes Smart Reply for Inbox possible. In particular, we're able to train such models to produce very good headlines for news articles. In this case, the model reads the article text and writes a suitable headline. To get an idea of what the model produces, you can take a look at some examples below. The first column shows the first sentence of a news article, which is the model input, and the second column shows the headline the model has written.
Input (article 1st sentence): metro-goldwyn-mayer reported a third-quarter net loss of dlrs 16 million due mainly to the effect of accounting rules adopted this year
Model-written headline: mgm reports 16 million net loss on higher revenue

Input (article 1st sentence): starting from july 1, the island province of hainan in southern china will implement strict market access control on all incoming livestock and animal products to prevent the possible spread of epidemic diseases
Model-written headline: hainan to curb spread of diseases

Input (article 1st sentence): australian wine exports hit a record 52.1 million liters worth 260 million dollars (143 million us) in september, the government statistics office reported on monday
Model-written headline: australian wine exports hit record high in september

Future research

We've observed that due to the nature of news headlines, the model can generate good headlines from reading just a few sentences from the beginning of the article. Although this task serves as a nice proof of concept, we started looking at more difficult datasets where reading the entire document is necessary to produce good summaries. In those tasks, training from scratch with this model architecture does not do as well as some other techniques we're researching, but it serves as a baseline. We hope this release can also serve as a baseline for others in their summarization research.
Wow, this is bringing back memories. I started my academic career as an AI researcher (ok, I was just a kid grad student back then) in the late 80s, when "traditional" AI was all the rage. Anyone here remember the Fifth Generation Computer project? Look it up on Wikipedia if you don't know.

I worked with Geoff Hinton (he supervised my thesis!) back when neural nets were considered one of the "non-traditional" AI techniques. I belonged to the KR (Knowledge Representation) camp, where we believed that logic and reasoning systems were the proper way to mimic human intelligence. Of course, the whole community kind of got stuck, and we all migrated to different things; I spent about 5 years working on pure logic theory and theorem provers. The problem was that, thinking back now, we were working on essentially Narrow AI problems (look up Narrow AI on Wikipedia if you wish). I developed Lisp (and Prolog and ... Poplog!) rule-based systems that fundamentally functioned as domain-specific intelligent systems (sometimes referred to as expert systems).

So there are different classes of researchers. Hinton believed we were working with the wrong toolset (he believed learning, rather than reflection, was the proper technique). Then you have Rodney Brooks @ MIT, who believes we are far, far away from anything resembling human intelligence and that we should seek to mimic insects instead; not surprisingly, he co-founded iRobot.

I do use some form of heuristics in my own trading system (for those who are interested), but it is a typical Narrow AI: even though it has elements of AI (reasoning, chaining, and some form of distributed rule-tree climbing, and believe me, there are moments when it looks very smart and agile), it is dumb as a doorpost when faced with even a different product set.
Everything old is new again: Deep Learning was called machine learning with deep layers, Big Data was called Case-Based Reasoning (look it up) and Federated Databases, and Cloud Computing was called time-sharing.
Instead of competing, human and machine learning can be complementary; hybrid intelligence is the future?! --- Will that change the nature of trading? I don't know; maybe not, if everyone has the same access to AI technology. --- OOT: is it possible to upload our minds to a "machine" neural network? How far are we?
Is it even currently known, to scientists or the general public, how the brain processes or preserves data/experiences?