Very generic responses when I tried asking some transformer-specific inference questions, and often off point. Not sure how this is different from GPT-3.5.