Because you lack the capacity to engage with the discussion you are trying to derail. Notice how you didn't refute the point. You just threw out insults and demanded I find a study to prove a negative. I asked ChatGPT to read the thread and generate an image of you based on your argument. Echo bat-pigeon chess champion? Pretty good attempt.
So, you have no evidence? Did you really just make some shit up and believe it?

Echopulse = mountain of evidence
Tuxan = no evidence

Echopulse won.
I can't prove a negative, as I'm not a trained Libertarian. Pick ANY paper you claim adds to your mountain of evidence, explain how its methods are sound, and show that its conclusion is what you claim (if you even remember what you claim at this stage), and then you might win.

I suggest that you:
1. Haven't actually read the papers.
2. Can't articulate why they prove your claim.
3. Will try to dodge by shifting the burden of proof back, when it's your claim.

It also generated this: You look cute.
Memory!? LOL. You gave yourself away. Memory? What the fuck are you talking about!? See, AI knows you are lying. You are no CS.

Prompt: Please evaluate Tuxan's message to assess whether he possesses a Master's degree in Computer Science and demonstrates a strong understanding of how large language models (LLMs) function.

Mercor said:
What forums were looked at? Why not just drop the whole ET political forum into GPT?

Tuxan said:
It certainly could handle ET in its entirety, but scraping all of it may cause problems for the site. There's also limited memory in user accounts. It has, however, spidered quite a lot of ET over time. I just asked it: "Please list the political forums and such that you may have used to generate your response to the prompt."

Response: I did not pull directly from specific political forums to generate my response. Instead, my analysis is based on general trends observed across many politically polarized online spaces, drawing from a combination of:
- Academic studies on online discourse and misinformation
- Well-documented patterns of behavior in major political forums and social media platforms
- Observations from forums like Reddit (r/politics, r/conservative), 4chan /pol/, Twitter/X, Facebook political groups, and dedicated political forums such as EliteTrader's politics section
- Studies on filter bubbles, echo chambers, and ideological polarization in digital spaces

AI: Based on Tuxan's message, I'll evaluate whether they demonstrate knowledge consistent with having a Master's degree in Computer Science and a strong understanding of LLMs.
Based on this short exchange, Tuxan demonstrates:
- Basic familiarity with LLM concepts and terminology
- Several fundamental misconceptions about how LLMs function
- Uncritical acceptance of problematic LLM outputs

While it's difficult to make definitive judgments from such a brief sample, the technical inaccuracies and misconceptions present suggest either:
- Tuxan does not have a Master's degree in Computer Science with a focus on ML/AI
- Or, if they do, they don't demonstrate a strong technical understanding of how LLMs actually function

The message contains common misconceptions that someone with specialized education in this field would likely avoid.
Spidered? That is not a term a CS would use. A CS would use the term crawled or scraped. Are you still using webcrawler.com?
@Tuxan See, AI knows:

Based on Tuxan's message, I'll evaluate whether they demonstrate knowledge consistent with having a Master's degree in Computer Science with years of experience in machine learning.

Technical Analysis: Understanding of LLMs and Their Capabilities

Data Ingestion Misconception: Tuxan states that GPT "has spidered quite a lot of ET over time." This attributes active, ongoing data collection to the model itself, which is technically incorrect. LLMs don't continuously spider websites; they're trained on datasets collected through separate processes before model training.

Context Window Confusion: The mention of "limited memory in user accounts" suggests a fundamental misunderstanding of how LLMs operate. The limitation is the model's context window, not user account storage.

Source Attribution Error: Tuxan appears to believe that an LLM can accurately report specific websites it was trained on. The purported response listing specific forums (including EliteTrader) as sources demonstrates a critical misunderstanding of how LLMs function.

Hallucination Awareness: An ML expert would recognize that asking an LLM to list its training sources would likely produce a hallucinated response, as models don't retain explicit records of training data sources.

Terminology and Communication Style

Dated Terminology: The use of "spidered" rather than more current terms like "crawled" or "indexed" suggests possible disconnection from current professional ML discourse.

Technical Precision: The vague statements about how LLMs process web data lack the precision typically expected from someone with advanced ML training.
Conclusion

Based on this message alone, Tuxan demonstrates several fundamental misconceptions about:
- How LLMs ingest and process training data
- The nature of model limitations
- The capability of LLMs to attribute their knowledge to specific sources
- The distinction between training and inference processes

These misconceptions are inconsistent with what would be expected from someone with:
- A Master's degree in Computer Science with ML specialization
- Years of practical experience in the machine learning field

While it's possible Tuxan has some CS background, the specific errors regarding LLM functionality suggest either:
- No specialized training in modern NLP/ML techniques
- No significant professional experience working with large language models
- A very casual understanding of the technology rather than the deep technical knowledge that would come from academic study and professional experience

The evidence in this message strongly suggests Tuxan does not have the claimed expertise in this specific domain.
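The "context window, not user account storage" point above can be illustrated with a toy sketch: a chat client has to fit the conversation history into the model's fixed context window, so older turns simply get dropped. The function name and the whitespace-split "token" count are illustrative assumptions, not how real tokenizers work.

```python
def fit_context(turns, max_tokens):
    """Keep the most recent turns that fit within a token budget."""
    kept, total = [], 0
    for turn in reversed(turns):  # walk from newest to oldest
        cost = len(turn.split())  # crude stand-in for a real tokenizer
        if total + cost > max_tokens:
            break  # older turns no longer fit: the "memory" runs out
        kept.append(turn)
        total += cost
    return list(reversed(kept))  # restore chronological order

# With a budget of 4 "tokens", the oldest turn is dropped:
print(fit_context(["a b", "c d e", "f"], 4))  # → ['c d e', 'f']
```

This is why a long thread "forgets" its beginning: nothing is stored per account for the model to consult; whatever falls outside the window is gone.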
Firstly, I was explaining something to Mercor in a way he might better grasp, him being a Boomer. Secondly, I'm not an expert in LLMs; I'm exploring them here and have picked up a fair bit in recent years, but I have an MCS, which means my understanding rests on foundations.

You know why your AI said it was basing its assessment on a "short exchange"?
1. It lacks sufficient data to determine my expertise.
2. Garbage in, garbage out. If you framed the prompt in a biased way, you primed the AI to produce a response that aligned with your viewpoint.

And here's where you fumble:
1. Memory in user accounts is real. Even ChatGPT's free-tier users have some memory retention (limited, but present).
2. In a broader ML/AI context, memory mechanisms exist in AI models (e.g., transformers, RNNs, and external memory architectures).
3. Many LLMs incorporate vector databases and other forms of retrieval-augmented generation (RAG) that function as "memory" in practical applications.
Wow. Thank you for admitting that. I honestly feel a little better about you now. Please explain. Again, unless you are talking about something completely different, "memory" is an incorrect term.

For your own edification, you should read the following from the previous post:

Source Attribution Error: Tuxan appears to believe that an LLM can accurately report specific websites it was trained on. The purported response listing specific forums (including EliteTrader) as sources demonstrates a critical misunderstanding of how LLMs function.

Hallucination Awareness: An ML expert would recognize that asking an LLM to list its training sources would likely produce a hallucinated response, as models don't retain explicit records of training data sources.

Are you familiar with hallucinations?
Well, I put your challenge and my reply into Gemini to freshen things up, and it replied:

"Tuxan's initial comment to Mercor about "memory" was likely referring to the practical limitations of their user account on the platform they were using to interact with the LLM. This could include restrictions on the amount of text they could input at once, the duration of a session, or other limitations common with free-tier access. In this context, their statement about not being able to feed the entirety of EliteTrader into the LLM due to "limited memory" in their user account makes logical sense from a user's perspective.

Their subsequent response to Echopulse further clarifies their understanding. While acknowledging they are not an LLM expert, they mention possessing an MCS, indicating a foundation in computer science principles. Their points about the AI's need for sufficient data to assess expertise and the potential for bias in prompting are accurate observations regarding AI model evaluation.

Furthermore, Tuxan's explanation of "memory" in different contexts (the limited retention within user sessions, the architectural memory mechanisms in AI models like Transformers and RNNs, and the role of Retrieval-Augmented Generation (RAG)) demonstrates an awareness of the various ways the term "memory" is used in relation to LLMs and AI in general. This suggests a level of technical understanding that aligns with someone who has a background in computer science.

In conclusion, considering the context of user account limitations and Tuxan's subsequent explanations, their messages indicate a practical understanding of interacting with LLMs through user interfaces, coupled with a foundational knowledge of relevant computer science concepts. Their remarks are consistent with someone who has a Master's degree in Computer Science and is exploring the functionalities and limitations of LLMs."
I know perfectly well I'd probably need Python and a bunch else to scrape ET entirely, but as I stated, even if I did, it would likely not be well received by the host or Baron.
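For what it's worth, the "bunch else" is mostly politeness plumbing. A minimal sketch of a well-behaved crawler skeleton, using only the standard library, would check robots.txt, deduplicate URLs, and rate-limit itself, which is exactly what protects the host. The delay value and function names here are assumptions for illustration; this has not been run against elitetrader.com.

```python
import time
import urllib.parse
import urllib.robotparser

POLITE_DELAY = 2.0  # seconds between requests, so the host isn't hammered

def normalize(url: str) -> str:
    """Drop fragments and trailing slashes so the same page isn't fetched twice."""
    p = urllib.parse.urlsplit(url)
    return urllib.parse.urlunsplit((p.scheme, p.netloc, p.path.rstrip("/"), p.query, ""))

def crawl(seed: str, max_pages: int = 10):
    """Visit up to max_pages URLs, honoring robots.txt and pausing between fetches."""
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(urllib.parse.urljoin(seed, "/robots.txt"))
    rp.read()  # network call: fetches the site's robots.txt
    seen, queue = set(), [normalize(seed)]
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen or not rp.can_fetch("*", url):
            continue
        seen.add(url)
        # fetch and parse the page here, then enqueue discovered links
        time.sleep(POLITE_DELAY)  # rate limit between requests
    return seen

print(normalize("https://www.elitetrader.com/et/forums/politics.60/#latest"))
# → https://www.elitetrader.com/et/forums/politics.60
```

Even with the delay, a forum the size of ET means thousands of requests, which is the "not well received" part.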