[time-nuts] Archiving Timing Data
scmcgrath at gmail.com
Mon Jan 10 21:57:08 UTC 2011
The counter-argument to a heavyweight database is that the size of the datastore increases dramatically, and there is no guarantee the tool will still be around in 10 years to read the data.
SQL databases use ASCII-format CSV to load and dump data to and from their internal representation.
Transactional systems still use a hierarchical database (think IBM IMS or RAIMA) to store and access large datasets such as credit-card authorization. These databases are one step away from ASCII or EBCDIC.
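As a sketch of that CSV dump/load round-trip, using Python's stdlib SQLite as a stand-in for any SQL engine (the table and column names here are illustrative, not from the post):

```python
import csv, io, sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (ts TEXT, val REAL)")
con.execute("INSERT INTO t VALUES ('2011-01-10T21:57:08', 1.5)")

# Dump the table to plain-ASCII CSV...
buf = io.StringIO()
writer = csv.writer(buf, lineterminator="\n")
for row in con.execute("SELECT ts, val FROM t"):
    writer.writerow(row)

# ...and load it back into a fresh table.
con.execute("CREATE TABLE t2 (ts TEXT, val REAL)")
for row in csv.reader(io.StringIO(buf.getvalue())):
    con.execute("INSERT INTO t2 VALUES (?, ?)", (row[0], float(row[1])))

result = con.execute("SELECT * FROM t2").fetchall()
print(result)  # [('2011-01-10T21:57:08', 1.5)]
```

The CSV file is the lowest common denominator: any future tool that can read ASCII can recover the data, which is the archival point being made above.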
Sent from my Verizon Wireless BlackBerry
From: Chris Albertson <albertson.chris at gmail.com>
Sender: time-nuts-bounces at febo.com
Date: Mon, 10 Jan 2011 12:42:03
To: Discussion of precise time and frequency measurement<time-nuts at febo.com>
Reply-To: Discussion of precise time and frequency measurement
<time-nuts at febo.com>
Subject: Re: [time-nuts] Archiving Timing Data
We have mountains of data here too. The best way to store it is in a
"real" database of some kind. There are several that are free, open
source, and multi-platform. The best for this use is Postgres. Since
it is free and open source, there is no reason not to use it.
In the past I've kept snapshots for simulations that ran for
hours/days/weeks, and we collected many hundreds of millions of data
points. We were then able to query for almost any condition or
expression, for example "Give me A, B where A-B less than 4 from July
5th 1998".
I can tell you first hand that having a billion lines of tab-separated
data is worse than useless. You need it cataloged such that you can
very quickly (in seconds) find useful subsets of the data, and you can
NEVER know in advance what subset you might need.
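A minimal sketch of that kind of query, using SQLite from Python's standard library instead of Postgres so it runs self-contained (the schema, column names, and sample values are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE readings (ts TEXT, chan_a REAL, chan_b REAL)")
con.executemany(
    "INSERT INTO readings VALUES (?, ?, ?)",
    [("1998-07-05T00:00:00", 10.0, 7.5),   # A-B = 2.5, matching date
     ("1998-07-05T00:00:10", 10.0, 2.1),   # A-B = 7.9, too large
     ("1998-07-06T00:00:00", 10.0, 8.0)],  # A-B = 2.0, wrong date
)

# "Give me A, B where A-B less than 4 from July 5th 1998"
rows = con.execute(
    "SELECT chan_a, chan_b FROM readings "
    "WHERE chan_a - chan_b < 4 AND ts LIKE '1998-07-05%'"
).fetchall()
print(rows)  # [(10.0, 7.5)]
```

With an index on the timestamp column, queries like this stay fast even at hundreds of millions of rows, which is exactly the "find useful subsets in seconds" property a flat tab-separated file cannot give you.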
On Mon, Jan 10, 2011 at 12:22 PM, Peter Vince <pvince at theiet.org> wrote:
> Would a TSV (Tab-Separated Value) format be preferable? Full stops
> and commas are used in numbers as decimal and thousands separators (or
> vice versa), so using a tab character would avoid any problems with
> commas in the actual data (and make it a bit easier to quickly
> eyeball when viewed in a text editor).
> Peter (G8ZZR, London, England)
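A small sketch of the point about tabs: a field containing a comma (e.g. European decimal notation) passes through a tab-delimited writer and reader untouched, with no quoting needed (the values below are made up):

```python
import csv, io

buf = io.StringIO()
writer = csv.writer(buf, delimiter="\t", lineterminator="\n")
writer.writerow(["timestamp", "offset"])
writer.writerow(["2011-01-10T12:42:03", "3,14"])  # comma as decimal separator

buf.seek(0)
rows = list(csv.reader(buf, delimiter="\t"))
print(rows[1])  # ['2011-01-10T12:42:03', '3,14'] -- comma survives intact
```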
> On 9 January 2011 17:15, Bob Camp <lists at rtty.us> wrote:
>> I doubt very much I'm the only one taking a mountain of timing data and not properly cataloging it. My guess is that maybe >90% of the list members are in the same boat. How about:
>> 1) A set of not-too-restrictive data format standards (CSV with a few restrictions ...)
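One possible reading of "CSV with a few restrictions", sketched as a tiny validator. The specific rules here (a header row, an ISO-8601 timestamp in the first column, plain decimal numbers after it) are hypothetical, not from Bob's post:

```python
import csv, io, re

ISO_TS = re.compile(r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}$")
NUMBER = re.compile(r"-?\d+(\.\d+)?([eE][-+]?\d+)?$")

def validate(text):
    """Check a CSV body against the (hypothetical) restricted format."""
    rows = list(csv.reader(io.StringIO(text)))
    header, data = rows[0], rows[1:]
    for row in data:
        if len(row) != len(header):       # every row matches the header width
            return False
        if not ISO_TS.match(row[0]):      # column 1 is an ISO-8601 timestamp
            return False
        if not all(NUMBER.match(f) for f in row[1:]):  # the rest are numbers
            return False
    return True

sample = "timestamp,phase_ns\n2011-01-10T21:57:08,12.5\n"
print(validate(sample))  # True
```

A checker this small is the practical payoff of agreeing on restrictions: data from different list members can be verified before it goes into anyone's archive.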
> time-nuts mailing list -- time-nuts at febo.com
> To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
> and follow the instructions there.
Redondo Beach, California