So you start this thread by publicly announcing and promoting the start of activities, and then, in a follow-up message, mention that new customers cannot yet be accepted. I would consider that bad timing. You'd be better off waiting to promote your company until you can actually handle new customers. By the time you are able to accept them, people may have forgotten about you.
To preface, we haven't used CQG, so we don't understand their offerings particularly well. However, we're very familiar with other data feeds catering to large institutions in the $1.5k-$6k MRC per venue range, and we've intentionally built a superior stack to replace those feeds.

For example, most vendors have instability or high WAN link latency because they depend solely on a single (often the cheapest) bandwidth provider. You'll see this if you traceroute some of our competitors' API gateways: it always takes a hop through Cogent or HE.net before terminating on their gateway. If you traceroute the FQDN for our historical API gateway (hist.databento.com), you'll most likely see one of the diverse routes through the tier 1 bandwidth providers (e.g. Lumen, Arelion, NTT) and private peering routes (e.g. Google Cloud, Verizon, Comcast) that we've aggregated.

We're sparing no expense to build the most performant feed, so our primary purpose is not to compete with low-cost providers. Typically, a solution like ours would be cost-prohibitive for an individual trader, but by metering our fees, anyone can use it - just like how you can spin up thousands of cores on AWS for dollars.

In any case, if you love another vendor's offering, you don't have to change that! That's another awesome thing about our pay-as-you-go model: you can simply use us as a backup feed or data source with zero cost commitment. And it's quite likely that your real-time license with the market through your original vendor already concurrently covers our use case, so you wouldn't have to pay additional real-time license fees. Some of our beta users rely on us in this manner.
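To make the transit check above concrete, here's a minimal Python sketch that scans traceroute output for hops whose reverse-DNS names fall under single-provider transit domains. The sample output and the domain list are purely hypothetical illustrations, not actual routes or an endorsement of any detection method:

```python
# Sketch: flag traceroute hops that resolve to well-known budget-transit
# domains, per the check described in the post above.
# The sample output and domain list are hypothetical, not real routes.

BUDGET_TRANSIT_DOMAINS = ("cogentco.com", "he.net")

def find_budget_transit_hops(traceroute_output: str) -> list:
    """Return hop lines whose hostname matches a budget-transit domain."""
    flagged = []
    for line in traceroute_output.splitlines():
        if any(domain in line for domain in BUDGET_TRANSIT_DOMAINS):
            flagged.append(line.strip())
    return flagged

# Hypothetical traceroute output for some vendor's API gateway:
sample = """\
 1  gateway.local (192.168.1.1)  1.2 ms
 2  be2.ccr41.example.cogentco.com (154.54.0.1)  12.4 ms
 3  api-gw.vendor.example (203.0.113.10)  13.1 ms
"""

print(find_budget_transit_hops(sample))
```

Running `traceroute <gateway FQDN>` yourself and eyeballing the hop hostnames accomplishes the same thing; the script just automates the pattern match.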
Will you support selective download of certain data elements from larger feeds? For example, auction trades and imbalances. Will you support raw (non-normalized) data?
I probably forgot to add another possible reason, which is that some brokers are afraid the competition will undercut them on pricing when the data is publicly shown. However, any clever competitor can obtain competitors' pricing data with minimal effort. So, unless other reasons escape me, there is no need for a phone call besides the reasons already mentioned.
I might get a spanking for this, but that instability comment, I just had to chime in. It's FUNNY! Sorry, carry on.
I don't think they have thought about any of this. I signed up for email updates and was told they have basically nothing in place yet, and I was just moved into another wait queue. I think they made a premature announcement and now need to walk it back. It's a little embarrassing to advertise services and then be unable to offer anything at all.
Yes, we're in the midst of adding message schemas for imbalances, RPI and cross trades as part of our recent integration of the NYSE Integrated prop feeds.

We won't be providing raw data over the internet, the reason being that it's impractical to distribute raw multicast data over sparse WAN links. Our normalization, compression and proprietary binary encoding cut down the size of the raw feed by a factor of at least 20, which is the only reason we're able to distribute full market feeds over the internet. We do normalize the data in a way that preserves some of the packet-level information and practically all of the payload-level information, though, to the point that it's possible to simulate a passive market making strategy with very high fidelity.

Our distribution licenses do allow us to distribute raw data, so we plan to offer that next year when we focus on our colo offerings.
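To give a rough feel for why a fixed-width binary encoding shrinks a feed so much compared to a verbose text representation (the record layout below is a made-up illustration for the size comparison, not Databento's actual schema or encoding, and the ratio you'll see is smaller than the 20x quoted above, which also reflects normalization and compression of the real feed):

```python
import json
import struct
import zlib

# Made-up trade records for illustration only - NOT a real vendor schema.
records = [
    {"symbol": "ESZ3", "price": 4510.25 + 0.25 * (i % 40), "size": 1 + i % 9,
     "ts_event": 1_700_000_000_000_000_000 + i * 1_000, "side": "BS"[i % 2]}
    for i in range(10_000)
]

# Verbose text encoding, one JSON line per record:
text_feed = b"".join((json.dumps(r) + "\n").encode() for r in records)

# Fixed-width binary encoding: 8s symbol + f64 price + u32 size
# + u64 nanosecond timestamp + 1-byte side = 29 bytes per record.
binary_feed = b"".join(
    struct.pack(
        "<8sdIQc",
        r["symbol"].encode().ljust(8, b"\x00"),
        r["price"],
        r["size"],
        r["ts_event"],
        r["side"].encode(),
    )
    for r in records
)

print(f"text: {len(text_feed)} bytes, binary: {len(binary_feed)} bytes")
print(f"compressed text: {len(zlib.compress(text_feed))} bytes, "
      f"compressed binary: {len(zlib.compress(binary_feed))} bytes")
```

The binary records come out a few times smaller before compression, and compression then exploits the repetitive structure further; the exact ratio depends entirely on the feed and the encoding.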