Marco, I am not pushing for it as a new function. There are more popular/useful technical things to spend time on!
Here is the info:
$META broke a record for consecutive up days.
We just had a massive correction in the market, of course.
I was seeking out stocks that have been coming back nicely from the sell-off.
Stocks increasing steadily after the pullback are worth looking at and learning from.
So I thought it would be a good challenge for the GPT helper you made.
It helped me create a nested eval to check the last 10 days for consecutive gains.
We've had other requests for "streak counting". So I think it's worthwhile to add a function. Here's the new function I'm whipping up. Should be available by next week.
Counts the number of consecutive times the formula evaluates to the streak type specified. Could be either the latest streak or the longest streak in the period.
Parameters:
streak: #positive, #negative, #increasing, #decreasing
The last two compare the values at iteration CTR with CTR+1
recent: if FALSE, it evaluates all iterations and finds the longest streak; otherwise it returns the most recent streak.
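To make sure I understand the intended semantics, here is a minimal Python sketch of what I think the function would do. The names, the oldest-to-newest ordering, and the exact comparison rules are my assumptions, not the actual P123 implementation:

```python
def streak(values, kind="positive", recent=True):
    """Hypothetical sketch of the proposed streak function.

    kind: 'positive', 'negative', 'increasing', 'decreasing'
          (the last two compare each value with the previous one,
          analogous to comparing iteration CTR with CTR+1)
    recent: True  -> length of the most recent streak
            False -> length of the longest streak in the period
    `values` is assumed ordered oldest to newest.
    """
    # Convert each data point (or consecutive pair) into a True/False hit.
    if kind == "positive":
        hits = [v > 0 for v in values]
    elif kind == "negative":
        hits = [v < 0 for v in values]
    elif kind == "increasing":
        hits = [b > a for a, b in zip(values, values[1:])]
    elif kind == "decreasing":
        hits = [b < a for a, b in zip(values, values[1:])]
    else:
        raise ValueError(f"unknown streak kind: {kind}")

    # Single pass: track the running streak and the longest seen.
    longest = current = 0
    for h in hits:
        current = current + 1 if h else 0
        longest = max(longest, current)
    # After the loop, `current` is the streak ending at the latest bar.
    return current if recent else longest
```

For example, `streak([1, 2, 3, -1, 5], "positive", recent=False)` finds the longest run of positive values (3), while `recent=True` returns only the trailing run (1).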
Marco, you're a great chef when you can just whip up a function like this. I will put it to use on day 1.
Regarding Loop functions, do you have a best-practice method for finding out WHEN a loop has its max, min, etc.? (Date, bars, weeks, etc. are all fine.) Right now I use a 2nd loop function and an eval statement.
(GPT was not yet helpful with this. In fact, it had a few hallucinations, making up a "break" command!) I'm sure it will learn/improve with time.
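For what it's worth, the usual single-pass alternative to a second loop is to track the position of the extreme while scanning for its value. This Python sketch is only an illustration of the pattern, not P123's loop syntax:

```python
def loop_max_with_index(values):
    """Return (max_value, index_of_max) in one pass.

    `values` is assumed ordered oldest to newest; the caller can then
    map the index to a date, bar offset, or week. Ties keep the first
    (earliest) occurrence.
    """
    best_val, best_idx = values[0], 0
    for i, v in enumerate(values[1:], start=1):
        # Update both the running max and where it occurred.
        if v > best_val:
            best_val, best_idx = v, i
    return best_val, best_idx
```

The same pattern works for the min (flip the comparison), so one loop can answer both "what" and "when" without a follow-up eval.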
ChatGPT is already doing something quite close to what you're suggesting - just in a different context.
It now uses memory to retain and recall details from past conversations with a user, even when the total token count exceeds the standard context window. This allows it to reference relevant information across a much broader span of interaction - functionally similar to being able to search extended forum history.
While I'm not sure about the technical implementation specifics for P123, your idea is clearly on point - and OpenAI's approach shows just how powerful that kind of continuity and long-term context can be.
Side note/question: LLMs are trained on large amounts of public data. Since much of the P123 site is publicly accessible and can be scraped, I assume ChatGPT may already "know" quite a bit about P123 - even without fine-tuning.
Final note: I've found myself discussing advanced AI tools less frequently here - because I suspect my ideas are more likely to be adopted outside of P123 than within it. In some cases, they may prove more useful to potential competitors than as features formally integrated into P123 - which may not benefit the existing user base in any way.
LLMs may ultimately become one of the primary ways P123's intellectual value is absorbed beyond its own platform - which isn't necessarily a good thing if the goal is to encourage open discussion of methods on the forum, particularly when users are actively being asked for use cases for their feature suggestions.
@Jrinne Thanks for the detailed reply. I completely agree about ChatGPT's memory feature - it is very useful and is the main reason I tend to use ChatGPT more than any other LLM these days - it simply "knows" me and doesn't need all the background repeated.
However, the case for fine-tuning on the P123 forum is different from the memory feature: the latter is user-specific, whereas the former would benefit the entire P123 community.
Even though a base model (like, say, DeepSeek) has seen parts of the P123 forum during pre-training, that exposure is quite passive. Fine-tuning on the forum plus other documentation gives the model more emphasis and structure to better understand the P123 ecosystem - logic, workflow, terminology, and even long-standing debates.
It would likely generate better, context-aware answers than a generic LLM.
I think it would be a huge help with onboarding new users (like myself) and potentially reduce the "discouragement and frustration" Marco referred to above.
PS - I found this video from Andrej Karpathy extremely helpful for understanding how LLMs work - sharing in case anyone hasn't come across it before. Well worth the 3.5 hrs!
I agree. I was only making the point that the forum CAN be made searchable (which it is not now) - basically restating your great idea, and adding that it is technically feasible, since a similar method is already being used at scale (by ChatGPT).