TimesFM: Time Series Foundation Model for time-series forecasting

12 days ago (github.com)

I'm curious why we seem convinced that this is a task that is possible or something worthy of investigation.

I've worked on language models since 2018; even then it was obvious why language was a useful and transferable task. I do not at all feel the same way about general univariate time series that could have any underlying process.

  • Time series data are inherently context-sensitive, unlike natural language, which follows predictable grammatical patterns. The patterns in time series data vary based on context. For example, flight data often show seasonal trends, while electric signals depend on the type of sensor used. There's also data that appear random, like stock data, though firms like Rentech manage to consistently find underlying alphas. Training a multivariate time series model would be challenging, but I don't see why not for specific applications.

    • Is Rentech the only group that genuinely manages to predict stock prices? Seems like the very observation that it’s still possible would be enough motivation for other groups to catch up over such a long period.

      Also, the first realistic approximation of Solomonoff induction we achieve is going to be interesting because it will destroy the stock market.

      11 replies →

  • Why do you think language is so special?

    There's an extensive body of literature across numerous domains that demonstrates the benefits of Multi-Task Learning (MTL). Actually, I have a whole folder of research papers on this topic; here's one of the earliest references on hand that I feel captures the idea succinctly in the context of modern ML:

    “MTL improves generalization by leveraging the domain-specific information contained in the training signals of related tasks” [Caruana, 1998]

    I see repetition and structure everywhere in life. To me it's not far-fetched that a model trained on daily or yearly trends could leverage that information in the context of, e.g., biological signals, which are influenced by circadian rhythm etc.

    Disclaimer: my background is in ML & bio-signals, I work with time series too much.

    • For those who haven't read it, Rich Caruana's thesis on multi-task learning is beautifully written (the cited 1998 paper here). It's amazing to see how far the field has come, and, at the same time, how advanced the thinking was in the 90s too.

  • The things that we are typically interested in have very clear patterns. In a way, if we find that there are no patterns, we don't even try to do any forecasting.

    "The Unreasonable Effectiveness of Mathematics in the Natural Sciences" [1] hints that there might be some value here.

    [1] https://en.m.wikipedia.org/wiki/The_Unreasonable_Effectivene...

    • Exactly. For example, I think the use of this model is in cases where you expect a metric like user count to follow some timing pattern, and you want to be alerted when it spikes.

      But you wouldn't want this model for something like file-upload storage usage, which only increases; there you would put alerts on max values rather than on patterns/periodic values.
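
      To make the contrast concrete, here's a toy sketch (the function names and the hour-of-day period are made up for illustration): the pattern alert compares a new sample with history at the same phase of the cycle, while the max alert is a plain ceiling.

          import numpy as np

          def pattern_alert(history, value, period=24, k=3.0):
              # Compare against past values at the same point in the
              # cycle (e.g. same hour of day); alert on a big deviation.
              history = np.asarray(history, dtype=float)
              phase = len(history) % period        # phase of the new sample
              same_phase = history[phase::period]  # history at that phase
              mu, sigma = same_phase.mean(), same_phase.std()
              return abs(value - mu) > k * max(sigma, 1e-9)

          def max_alert(value, ceiling):
              # The right tool for monotone metrics like storage usage.
              return value > ceiling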

  • Why not? There are plenty of time series that have underlying patterns, which means you can do better than a total guess even without any knowledge of what you are predicting.

    Think about something like traffic patterns. You probably won't predict higher traffic on game days, but predicting rush hour is going to be pretty trivial.

  • There is potential for integrating ML with time series data in industrial applications (things like smelters, reactors, etc.), where you have a continuous stream of time series measurements from things like gauges and thermocouples. If you can detect (and respond to) changing circumstances faster than humans in a control room reacting to trends or alarms, then there are potentially big efficiency gains...

    Operator guidance is often based on heuristics: when metric A exceeds value X for Y seconds, take action Z. Or rates of change: if the signal is changing at a rate of more than X, etc.

    So in these areas there is potential for an ML solution, especially one capable of learning (e.g. the last response overshot by X, so trim the next response accordingly).
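
    That kind of heuristic is easy to state in code; here's a toy version (the thresholds and names are illustrative, not from any real control system):

        from collections import deque

        class ThresholdRule:
            """Fire when metric A stays above X for Y consecutive samples."""
            def __init__(self, limit_x, hold_y):
                self.limit_x = limit_x
                self.window = deque(maxlen=hold_y)

            def update(self, reading):
                self.window.append(reading > self.limit_x)
                full = len(self.window) == self.window.maxlen
                return full and all(self.window)

        # e.g. act if a thermocouple reads above 850 C for 30 straight samples
        rule = ThresholdRule(limit_x=850.0, hold_y=30)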

    • Every time I've actually tried something like this, it has not outperformed statistical process control.

      It's not just that control charts are great signal detectors; managing processes like that also takes a certain statistical literacy, which one gets from applying SPC faithfully for a while and does not get from tossing ML at the problem and crossing one's fingers.
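
      For reference, the individuals control chart being described here is only a few lines (a minimal sketch; real SPC practice layers the Western Electric run rules on top of this):

          import numpy as np

          def shewhart_limits(baseline):
              # Estimate 3-sigma limits from an in-control baseline using
              # the average moving range (the standard individuals-chart way).
              x = np.asarray(baseline, dtype=float)
              sigma = np.abs(np.diff(x)).mean() / 1.128  # d2 constant for n=2
              center = x.mean()
              return center - 3 * sigma, center + 3 * sigma

          lo, hi = shewhart_limits([10.1, 9.8, 10.0, 10.3, 9.9, 10.2])
          out_of_control = lambda v: v < lo or v > hi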

      3 replies →

  • Fundamentally, the pre-trained model would need to learn a "world model" to predict well in distinct domains. This should be possible, setting aside compute requirements and the exact architecture.

    After all, the physical world (down to the subatomic level) is governed by physical laws. Ilya Sutskever from OpenAI stated that next-token prediction might be enough to learn a world model (see [1]). That would imply that a model can learn a "world model" indirectly, which is an even stronger claim than learning one directly through pre-training on time-series data.

    [1] https://www.youtube.com/watch?v=YEUclZdj_Sc

    • But the data-generating process could be literally anything. We are not constrained by physics in any real sense if we're predicting financial markets, occurrences of a certain build error, or termite behavior.

      1 reply →

    • "predicting the next token well means that you understand the underlying reality that led to the creation of that token"

      People on the AI-hype side of things tend to believe this, but I really fundamentally don't.

      It's become a philosophical debate at this point (what does it mean to "understand" something, etc.)

  • > I'm curious why we seem convinced that this is a task that is possible or something worthy of investigation.

    There's a huge industry around time series forecasting used for all kinds of things like engineering, finance, climate science, etc. and many of the modern ones incorporate some kind of machine learning because they deal with very high dimensional data. Given the very surprising success of LLMs in non-language fields, it seems reasonable that people would work on this.

  • +1 for “any underlying process”. It would be interesting to know what use case they had in mind.

  • Well... if you look at a language a certain way, it is just a way to put bits in a certain order. If you forget about the 'language' part, it kinda makes sense to try, because why shouldn't it work?

  • I think there are some generalizable notions of multiscale periodicity that could get embedded into some kind of latent space.

  • As you say, without knowing anything about the underlying process, we can't predict in general. Some other comments point to contexts in which we do know something about the underlying process. For instance, I don't think finance is somewhere you can apply this kind of stuff.

  • There was a paper written a while back that proved mathematically that you can correlate any time series with any other time series, thus vaporizing any perception of value gained by correlating time series (at least for those people who read the paper). Just wanted to share.

  • Not really. It's true it would usually need more context than a single-series dataset, but you can predict broadly accurate-ish bandwidth usage trends using simple statistical extrapolation; we've been doing that since the early 90s. If you give a model your subscriber numbers and usage data as time series, it should be able to tell you quite accurately how much electricity|bandwidth|gas|road traffic|metro passengers at station Z... you'll be seeing at 4pm on January 4th, 2026.
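
    A sketch of what that kind of simple extrapolation can look like (assuming, say, hourly data; the function is illustrative, not from any library): fit a straight-line trend, then add the average residual for each position in the daily cycle.

        import numpy as np

        def trend_plus_seasonal(y, horizon, period=24):
            y = np.asarray(y, dtype=float)
            t = np.arange(len(y))
            slope, intercept = np.polyfit(t, y, 1)    # linear trend
            resid = y - (slope * t + intercept)
            # average residual per position in the cycle (hour of day, ...)
            profile = np.array([resid[p::period].mean() for p in range(period)])
            tf = np.arange(len(y), len(y) + horizon)
            return slope * tf + intercept + profile[tf % period]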

On a related note, Amazon also had a model for time series forecasting called Chronos.

https://github.com/amazon-science/chronos-forecasting

Seems like a pretty small (low-latency) model. Would be interesting to hook it up to mouse input (x and y) and see how well it predicts where I’m gonna move the mouse (maybe with and without seeing the predicted path).

"Time series" is such an over-subscribed term. What sorts of time series is this actually useful for?

For instance, will it be able to predict dynamics for a machine with thousands of sensors?

  • Specifically, it's referring to univariate, contiguous point forecasts. Honestly, I'm a little puzzled by the benchmarks.

  • Even if it were for multivariate time series, the model would first need to infer what machine we are talking about, then its working conditions, and only then make a reasonable forecast based on a hypothesis of its dynamics. I don’t know, seems pretty hard.

    • Indeed. An issue I ran into over and over while doing research for semiconductor manufacturing.

      My complaint was more illustrative than earnest.

"Why would you even try to predict the weather if you know it's going to be wrong?"

- most OCs on this thread

  • I have a few qualms with this app: 1. For a Linux user, you can already build such a system yourself quite trivially by getting an FTP account, mounting it locally with curlftpfs, and then using SVN or CVS on the mounted filesystem. From Windows or Mac, this FTP account could be accessed through built-in software.

    2. It doesn't actually replace a USB drive. Most people I know e-mail files to themselves or host them somewhere online to be able to perform presentations, but they still carry a USB drive in case there are connectivity problems. This does not solve the connectivity issue.

    3. It does not seem very "viral" or income-generating. I know this is premature at this point, but without charging users for the service, is it reasonable to expect to make money off of this?

Dear googler or meta-er or timeseries transformer startup something-er: Please make a ChatGPT/chat.lmsys.org style interface for one of these that I can throw data at and see what happens.

This one looks pretty easy to set up, in fairness, but some other models I've looked at have been surprisingly fiddly / locked behind an API.

Perhaps such a thing already exists somewhere?

It seems to me that predicting something based on time alone is rarely accurate or meaningful.

Suppose you want to buy stocks. Would you look at a time-based graph and buy according to that? Or would you rather look at financial data and see earnings and profits? Wouldn't a graph that has financial performance on the x-axis be more meaningful than one that has time?

What if you research real estate in a particular area? Wouldn't square footage be a better measure than time?

  • > Would you look on a time based graph and buy according to that? Or you rather look at financial data, see earnings, profits?

    Things affecting financials happen through time.

    • All things happen through time, but my argument is that time might not be the best parameter to model relations.

Is anyone using neural networks for anomaly detection in observability? If so, which model and how many metrics are you supporting per core?

I'm not sure I understand two things here. Could someone clarify:

1. This is a foundation model, so you're expected to fine-tune it for your use case, right? (But the readme doesn't mention tuning.)

2. When submitting two series, do they impact each other in predictions?

How can a time series model be pre-trained? I think I’m missing something.

  • If you have a univariate series, just single values following each other -

    [5, 3, 3, 2, 2, 2, 1, …]

    What is the next number? Well, let’s start with the search space: what is the possible range of the next number? Assuming unsigned 32-bit integers (for simplicity of explanation), it’s 0 to 2^32-1.

    So are all of those possible outputs equally likely? The next number could be 1, or it could be 345,654,543 … are those outputs equally likely?

    Even though we know nothing about this sequence, most time series don’t make enormous random jumps, so no, they are not equally likely; 1 is the more likely of the two we discussed.

    OK, so some patterns are more likely than others; let’s analyse lots and lots of time series data and see if we can build a generalised model that can be fine-tuned or used as a feature extractor.

    Many time series datasets have repeating patterns, momentum, symmetries, all of these can be learned. Is it perfect? No, but what model is? And things don’t have to be perfect to be useful.

    There you go - that’s a pre-trained time series model in a nutshell
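
    A toy version of that loop, assuming PyTorch (just the idea in miniature, nothing like TimesFM's actual architecture): train one next-value predictor on windows drawn from many unrelated series, so it has to learn generic structure rather than any one dataset.

        import numpy as np
        import torch
        import torch.nn as nn

        rng = np.random.default_rng(0)
        # Pretend "corpus": unrelated series that still share generic
        # structure (periodicity, trend, noise) but no common domain.
        corpus = [
            np.sin(np.arange(200) / rng.uniform(2, 20))
            + rng.uniform(-1, 1) * np.linspace(0, 1, 200)
            + rng.normal(0, 0.1, 200)
            for _ in range(100)
        ]

        # Tiny next-value predictor: 32 past values in, 1 prediction out.
        model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)

        for step in range(2000):
            s = corpus[rng.integers(len(corpus))]   # random series,
            i = rng.integers(len(s) - 33)           # random window
            ctx = torch.tensor(s[i:i + 32], dtype=torch.float32)
            tgt = torch.tensor(s[i + 32:i + 33], dtype=torch.float32)
            loss = nn.functional.mse_loss(model(ctx), tgt)
            opt.zero_grad(); loss.backward(); opt.step()

    Fine-tuning would then be the same loop over your own series, starting from these weights instead of random ones.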

  • See the third paragraph of the introduction of the paper [1] mentioned in the first paragraph of the repo.

    [1] https://arxiv.org/abs/2310.10688

    • I guess they pre-trained the model to exploit common patterns found in any time series (e.g., seasonalities, trends, etc.)... What would be interesting, though, is to see whether it spots patterns that are domain-specific (e.g., the ventricular-systole dip in an electrocardiogram) and can transfer those (that would obviously be useless in this specific example, but maybe there are interesting domain transfers out there).

  • My understanding is that, while your eye can naturally spot a dependency over time in time series data, machines can’t. So as we did for imaging, where we pre-trained models to let machines easily identify objects in pictures, we are now doing the same to let machines “see” dependencies over time. How those dependencies work is another story.

Anyone have insights working with Ikigai’s “Large Graphical Model” and how well it does on time-series? It’s proprietary, but I’m curious how well it performs.

Is it better than Prophet from Meta?

  • I imagine they're both worse than good old exponential smoothing or SARIMAX.
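
    For reference, both of those baselines are a few lines with statsmodels (the orders below are illustrative; in practice you'd select them per series):

        import numpy as np
        from statsmodels.tsa.holtwinters import ExponentialSmoothing
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        # toy monthly series: seasonality plus trend
        y = np.sin(np.arange(120) * 2 * np.pi / 12) + np.arange(120) / 24

        ets = ExponentialSmoothing(y, trend="add", seasonal="add",
                                   seasonal_periods=12).fit()
        arima = SARIMAX(y, order=(1, 1, 1),
                        seasonal_order=(1, 1, 1, 12)).fit(disp=False)

        print(ets.forecast(12))
        print(arima.forecast(12))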

    • Depends on the use case. Hybrid approaches have been dominating the M-competitions, but the accuracy differences between statistical and machine learning models are generally small percentages.

      And exponentially higher cost for ML models.

      4 replies →

  • Prophet was pretty bad, so yes, but it doesn't seem much better than ARIMA.

    • What about NeuralProphet, which came after Prophet? Some companies, like Mixpanel, mention in their documentation that they use Prophet for forecasting/anomaly detection.

When it comes to time series forecasting, if the method actually works, it sure as hell isn't being publicly released.

  • Some time series are more predictable than others. Being good at predicting the predictable ones is useful.

    For example, you can easily predict the weather with decent accuracy: tomorrow is going to be about the same as today. From there you can work on better models.

    Or predicting a failure in a factory, because a certain vibration pattern on an industrial machine has always ended in a massive failure after a few days.

    But I agree that if a model is good at predicting the stock market, it’s not going to be released.
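
    The weather example above is the classic persistence baseline, worth writing down explicitly because any fancier model has to beat it (a minimal sketch):

        def persistence(y, horizon):
            # naive baseline: tomorrow looks like today
            return [y[-1]] * horizon

        def seasonal_persistence(y, horizon, period=7):
            # slightly smarter: next Monday looks like last Monday
            return [y[-period + (h % period)] for h in range(horizon)]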

  • And yet we have those huge Llamas publicly available. These are computers that talk, dammit.