(continued from Part II)
One thing Formula 3 (F3) was designed to help with is the quantitative transformation of time series. If you had to compute the mean values over a 1-hour interval of, say, the ticket sales, then the following TSP operator takes care of it:
every 10 minutes
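Outside of F3/TSP, the same aggregation can be sketched in plain Python. This is only an illustrative sketch, not F3 itself; I am assuming the series is a sorted list of (timestamp, value) pairs and that the operator means "a 1-hour mean, re-evaluated every 10 minutes":

```python
# Sketch (not F3/TSP): a 1-hour mean over a ticket-sales series,
# re-evaluated every 10 minutes. The series is assumed to be a
# time-sorted list of (timestamp_in_seconds, value) pairs.

WINDOW = 3600   # 1 hour, in seconds
STEP = 600      # 10 minutes, in seconds

def hourly_means(series, start, end):
    """Yield (t, mean) at every 10-minute step, averaging the
    hour of samples ending at t."""
    for t in range(start, end + 1, STEP):
        window = [v for (ts, v) in series if t - WINDOW < ts <= t]
        if window:
            yield t, sum(window) / len(window)

# Toy ticket-sales series: one sample every 10 minutes, 2 hours of data.
sales = [(i * 600, 10 + i) for i in range(12)]
for t, m in hourly_means(sales, 3600, 7200):
    print(t, m)
```

A real TSP operator would of course stream over the series instead of rescanning it for every step; the list comprehension above is just the shortest way to state the semantics.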
(continued from Part I)
Last time I implicitly proposed to think of some parts of a (geosemantic) application in terms of time series. This is not so far-fetched; consider, for instance, a semantically enabled tourism application for, say, Vienna.
Sure, there are a number of very static things you would store in the semantic network:
- the sites: churches, the cathedral, churches, and even more churches,
- the museums, galleries, museums, and even more museums,
- the tourism ontology, containing buildings, museums and, yes, the churches.
But even if this is Vienna, not everything is static: there are (insane) traffic conditions, (predominantly Italian and Spanish) tourists roaming through the city, concert tickets sold at the weirdest places. All of these are perfect candidates to be packed into time series.
When people design semantic systems, a typical architecture looks something like this:
- an RDF triple (or Topic Maps) store for the odd and irregular data,
- some relational DB, either imported into the semantic store, or wrapped, or linked via a message bus (MQ, events, ...),
- some more or less sophisticated integration, and
- the user interface on top of it.
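To make the shape of that architecture concrete, here is a deliberately naive sketch in Python, with both the semantic store and the wrapped relational source simulated in memory. All names (`triples`, `ticket_sales`, `describe`) are illustrative, not any real store's API:

```python
# Sketch of the hybrid architecture: static facts in a (simulated)
# triple store, dynamic data behind a (simulated) relational wrapper,
# and a thin integration layer answering mixed questions.

# 1. The static semantic store: subject-predicate-object triples.
triples = [
    ("Stephansdom", "isA", "Cathedral"),
    ("Albertina", "isA", "Museum"),
    ("Cathedral", "subClassOf", "Building"),
]

def objects(subject, predicate):
    """Look up all objects for a given subject and predicate."""
    return [o for (s, p, o) in triples if s == subject and p == predicate]

# 2. The wrapped dynamic source: a dict standing in for a SQL table
#    of ticket sales per site.
ticket_sales = {"Albertina": [("2008-06-01T10:00", 42)]}

def latest_sales(site):
    rows = ticket_sales.get(site, [])
    return rows[-1] if rows else None

# 3. The integration layer: one answer from both worlds.
def describe(site):
    return {
        "site": site,
        "kind": objects(site, "isA"),
        "latest_sales": latest_sales(site),
    }

print(describe("Albertina"))
```

The point of the sketch is only the seam in step 3: the static and the dynamic halves live in different stores and only meet in the integration layer.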
Now this is all well and good for your middle-of-the-road semantic portal, but the class of applications I have in mind has one thing in common:
Data dynamics, with temporal and geospatial aspects. And that with physical units.
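What such a sample could carry can be sketched in a few lines of Python. The field names are my own illustration, not a proposed schema; the point is only that time, place, value and physical unit travel together:

```python
from typing import NamedTuple

# Sketch: one sample of a geospatial, unit-carrying time series.
class Sample(NamedTuple):
    timestamp: str   # temporal aspect
    lat: float       # geospatial aspect
    lon: float
    value: float     # the measurement itself
    unit: str        # the physical unit, e.g. "cars/min"

traffic = [
    Sample("2008-06-01T10:00", 48.2082, 16.3738, 35.0, "cars/min"),
    Sample("2008-06-01T10:10", 48.2082, 16.3738, 41.0, "cars/min"),
]

# A unit-aware mean only makes sense if all samples share one unit.
def mean(series):
    units = {s.unit for s in series}
    assert len(units) == 1, "refusing to average mixed units"
    return sum(s.value for s in series) / len(series), units.pop()

print(mean(traffic))
```

Carrying the unit inside the sample is what lets an aggregation operator refuse nonsense like averaging cars/min against euros, which a bare column of floats would happily do.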