Developing "Spartan"

Discussion in 'Journals' started by nooby_mcnoob, Feb 3, 2019.

  1. Not taking this trade (backtest), but check it out: EURGBP, with the forecast in the chart and everything. This is fun.

    upload_2019-3-4_9-44-27.png
     
    #131     Mar 4, 2019
  2. Brexit uncertainty in action?
     
    #132     Mar 4, 2019
  3. Not really, just the other stuff isn't lining up.
     
    #133     Mar 4, 2019
  4. After backtesting with the cute economic news visualizations, I realized that, cute as they were, they didn't help me make systematic decisions. After some trial and error, I hit on the right UX for me.

    The idea is that before entering a trade, or deciding to exit, I review any upcoming news that could affect the current trend/direction. It should keep me out of bad trades and get me out of good ones before they go bad.

    More or less, the logic is something like: "If there is a significant event in the next day, stay out."
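
    As a rough sketch (simplified and with made-up names; assume the news rows are (datetime, currency, impact, ...) tuples, and that the "High" check matches whatever impact text the data source provides):

    Code:
        import datetime as dt

        def should_stay_out(events, pair, now, horizon=dt.timedelta(days=1)):
            """True if a significant event for either leg of `pair` falls within `horizon`."""
            legs = {pair[:3], pair[3:]}  # e.g. "EURGBP" -> {"EUR", "GBP"}
            for when, currency, impact, *_ in events:
                # "significant" here = high-impact and relevant to the pair being traded
                if currency in legs and "High" in impact and now <= when <= now + horizon:
                    return True
            return False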

    upload_2019-3-5_10-43-16.png
     
    #134     Mar 5, 2019
  5. Thanks again for the updates, very interesting.

    Quick question: how are you retrieving the news data? Are you scraping the forexfactory website, or do they provide a feed?
     
    #135     Mar 5, 2019
  6. You got it, scraping forexfactory.
     
    #136     Mar 5, 2019
  7. Here's the code if you're interested. I got it off a gist somewhere:

    Code:
        # module-level imports this method relies on
        import datetime as dt
        import typing

        import requests
        from bs4 import BeautifulSoup

        def _getEconomicCalendar(self, start: dt.datetime) -> typing.Iterable[typing.Tuple[dt.datetime, str, str, str, str, str, str]]:
            month = start.strftime("%b.%Y").lower()
            baseURL = f"https://www.forexfactory.com/calendar.php?month={month}"
            r = requests.get(baseURL)
            data = r.text
            soup = BeautifulSoup(data, "lxml")
    
            # get and parse table data, ignoring details and graph
            table = soup.find("table", class_="calendar__table")
    
            # do not use the ".calendar__row--grey" css selector (reserved for historical data)
            trs = table.select("tr.calendar__row.calendar_row")
            fields = ["date","time","currency","impact","event","actual","forecast","previous"]
    
            # some rows do not have a date (cells merged)
            curr_year = str(start.year)
            curr_date = ""
            curr_time = ""
            for tr in trs:
    
                # fields may mess up sometimes, see Tue Sep 25 2:45AM French Consumer Spending
                # in that case we log the date/time where the error occurred and move on
                try:
                    for field in fields:
                        data = tr.select("td.calendar__cell.calendar__{}.{}".format(field,field))[0]
                        # print(data)
                        if field=="date" and data.text.strip()!="":
                            curr_date = data.text.strip()
                        elif field=="time" and data.text.strip()!="":
                            # time is sometimes "All Day" or "Day X" (eg. WEF Annual Meetings)
                            if data.text.strip().find("Day")!=-1:
                                curr_time = "12:00am"
                            else:
                                curr_time = data.text.strip()
                        elif field=="currency":
                            currency = data.text.strip()
                        elif field=="impact":
                            # when impact says "Non-Economic" on mouseover, the relevant
                            # class name is "Holiday", thus we do not use the classname
                            impact = data.find("span")["title"]
                        elif field=="event":
                            event = data.text.strip()
                        elif field=="actual":
                            actual = data.text.strip()
                        elif field=="forecast":
                            forecast = data.text.strip()
                        elif field=="previous":
                            previous = data.text.strip()
    
                    date = dt.datetime.strptime(",".join([curr_year,curr_date,curr_time]),
                                                "%Y,%a%b %d,%I:%M%p")
                    # yield a tuple matching the annotated return type
                    yield (date, currency, impact, event, actual, forecast, previous)
                except Exception as e:
                    # log the offending row (with its date/time context) and keep going
                    self._logger.warning("Error parsing: %s - %s", [data, curr_year, curr_date, curr_time], str(e))
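
    For reference, here's roughly how I drive it (a simplified example; "scraper" just stands in for whatever object holds the method, and the impact check depends on what ForexFactory puts in the title attribute):

    Code:
        import datetime as dt

        # pull the calendar for March 2019 and keep only the high-impact rows
        rows = list(scraper._getEconomicCalendar(dt.datetime(2019, 3, 1)))
        high_impact = [r for r in rows if "High" in r[2]]  # r[2] is the impact field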
    
     
    #137     Mar 5, 2019
    andysinclair likes this.
  8. Finally starting to get serious about backtests. The discipline required is enormous. ~12% annualized return on a backtest that ended prematurely because I accidentally started a new one... Ugh.

    Ignore the odd quantities and prices; I'll have to fix those soon.

    The goal is to have one backtest completed every day before lunch (yes, it takes that long, and yes, I hope to automate this at some point).

    upload_2019-3-5_15-1-58.png
     
    #138     Mar 5, 2019
  9. Volume

    Number of quotes per hour; each line is a different currency:

    upload_2019-3-5_16-38-27.png
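
    For what it's worth, the aggregation behind the chart is nothing fancy, roughly this (a sketch assuming the quotes sit in a pandas DataFrame with a datetime index and a "currency" column, which are just my illustrative names):

    Code:
        import pandas as pd

        def quotes_per_hour(quotes: pd.DataFrame) -> pd.DataFrame:
            """quotes: DataFrame with a DatetimeIndex and a "currency" column (illustrative schema).
            Returns one column per currency with the number of quotes in each hour."""
            return (
                quotes.groupby("currency")
                      .resample("1H")
                      .size()
                      .unstack(level=0)  # currencies become columns -> one line each when plotted
            )

        # quotes_per_hour(quotes).plot()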
     
    #139     Mar 5, 2019
  10. You might want to have the software save backtest states periodically before the backtest ends. So if the backtest is accidentally stopped (e.g., computer decides to reboot), the software would be able to resume a prior backtest at the point of the most recent save.

    Also, I see "2019-04-24 17:00:00" in the account_id column. That sounds more like a back-to-the-future test than a backtest :D.
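
    The checkpointing could be as simple as something like this (just a sketch of the general pattern with made-up names, not your actual backtest code):

    Code:
        import pickle
        import time

        CHECKPOINT_EVERY = 300  # seconds between saves; illustrative

        def run_backtest(state, step, checkpoint_path="backtest.ckpt"):
            """Repeatedly call step(state), pickling state every CHECKPOINT_EVERY seconds.
            Assumes state is a dict with a "done" flag."""
            last_save = time.monotonic()
            while not state.get("done"):
                step(state)
                if time.monotonic() - last_save >= CHECKPOINT_EVERY:
                    with open(checkpoint_path, "wb") as f:
                        pickle.dump(state, f)
                    last_save = time.monotonic()

        def resume(checkpoint_path="backtest.ckpt"):
            """Reload the most recently saved state and continue from there."""
            with open(checkpoint_path, "rb") as f:
                return pickle.load(f)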
     
    #140     Mar 5, 2019
    nooby_mcnoob likes this.